/*
   Copyright (c) 2024, MariaDB plc

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; version 2 of the License.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1335 USA
*/

#include <my_global.h>
#include "key.h"                                // key_copy()
#include "vector_mhnsw.h"
#include "item_vectorfunc.h"
#include <scope.h>
#include <my_atomic_wrapper.h>
#include "bloom_filters.h"
ulonglong mhnsw_cache_size;

// Algorithm parameters
static constexpr float alpha= 1.1f;
static constexpr uint ef_construction= 10;
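
/*
  Fields of a graph (hlindex) table record. One row per node (see the
  FVectorNode comment below): the node's layer, a ref into the main
  table, the quantized vector, and its neighbor lists packed together.
*/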
enum Graph_table_fields {
  FIELD_LAYER, FIELD_TREF, FIELD_VEC, FIELD_NEIGHBORS
};
enum Graph_table_indices {
  IDX_TREF, IDX_LAYER
};

class MHNSW_Context;
class FVectorNode;

/*
  One vector, an array of coordinates in ctx->vec_len dimensions
*/
#pragma pack(push, 1)
struct FVector
{
  static constexpr size_t data_header= sizeof(float);
  static constexpr size_t alloc_header= data_header + sizeof(float);

  float abs2, scale;
  int16_t dims[4];

  uchar *data() const { return (uchar*)(&scale); }

  static size_t data_size(size_t n)
  { return data_header + n*2; }

  static size_t data_to_value_size(size_t data_size)
  { return (data_size - data_header)*2; }
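
  /*
    Vectors are stored quantized to int16_t: create() finds the
    coordinate with the largest magnitude, derives a per-vector scale
    so that this coordinate quantizes to 32767, and stores every
    coordinate as dims[i]= round(v[i]/scale), i.e. v[i] ~= dims[i]*scale.
  */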
  static const FVector *create(void *mem, const void *src, size_t src_len)
  {
    float scale=0, *v= (float *)src;
    size_t vec_len= src_len / sizeof(float);
    for (size_t i= 0; i < vec_len; i++)
      if (std::abs(scale) < std::abs(get_float(v + i)))
        scale= get_float(v + i);

    FVector *vec= align_ptr(mem);
    vec->scale= scale ? scale/32767 : 1;
    for (size_t i= 0; i < vec_len; i++)
      vec->dims[i]= static_cast<int16_t>(std::round(get_float(v + i) / vec->scale));
    vec->postprocess(vec_len);
    return vec;
  }

  void postprocess(size_t vec_len)
  {
    fix_tail(vec_len);
    abs2= scale * scale * dot_product(dims, dims, vec_len) / 2;
  }

#ifdef AVX2_IMPLEMENTATION
  /************* AVX2 *****************************************************/
  static constexpr size_t AVX2_bytes= 256/8;
  static constexpr size_t AVX2_dims= AVX2_bytes/sizeof(int16_t);
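
  /*
    _mm256_madd_epi16 multiplies 16 pairs of int16 values and adds
    adjacent products into 8 int32 lanes, so one iteration covers 16
    dimensions; the lanes are converted to float and accumulated.
    fix_tail() zeroes the padding, so rounding len up is harmless.
  */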
  AVX2_IMPLEMENTATION
  static float dot_product(const int16_t *v1, const int16_t *v2, size_t len)
  {
    typedef float v8f __attribute__((vector_size(AVX2_bytes)));
    union { v8f v; __m256 i; } tmp;
    __m256i *p1= (__m256i*)v1;
    __m256i *p2= (__m256i*)v2;
    v8f d= {0};
    for (size_t i= 0; i < (len + AVX2_dims-1)/AVX2_dims; p1++, p2++, i++)
    {
      tmp.i= _mm256_cvtepi32_ps(_mm256_madd_epi16(*p1, *p2));
      d+= tmp.v;
    }
    return d[0] + d[1] + d[2] + d[3] + d[4] + d[5] + d[6] + d[7];
  }
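
  /*
    alloc_size() over-allocates by AVX2_bytes-1 so that align_ptr()
    can place the object where ptr + alloc_header - the start of
    dims[] - falls on a 32-byte boundary, as the AVX2 loads above
    expect.
  */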
  AVX2_IMPLEMENTATION
  static size_t alloc_size(size_t n)
  { return alloc_header + MY_ALIGN(n*2, AVX2_bytes) + AVX2_bytes - 1; }

  AVX2_IMPLEMENTATION
  static FVector *align_ptr(void *ptr)
  { return (FVector*)(MY_ALIGN(((intptr)ptr) + alloc_header, AVX2_bytes)
                      - alloc_header); }

  AVX2_IMPLEMENTATION
  void fix_tail(size_t vec_len)
  {
    bzero(dims + vec_len, (MY_ALIGN(vec_len, AVX2_dims) - vec_len)*2);
  }
#endif

  /************* no-SIMD default ******************************************/
  DEFAULT_IMPLEMENTATION
  static float dot_product(const int16_t *v1, const int16_t *v2, size_t len)
  {
    int64_t d= 0;
    for (size_t i= 0; i < len; i++)
      d+= int32_t(v1[i]) * int32_t(v2[i]);
    return static_cast<float>(d);
  }

  DEFAULT_IMPLEMENTATION
  static size_t alloc_size(size_t n) { return alloc_header + n*2; }

  DEFAULT_IMPLEMENTATION
  static FVector *align_ptr(void *ptr) { return (FVector*)ptr; }

  DEFAULT_IMPLEMENTATION
  void fix_tail(size_t) { }
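
  /*
    Returns half the squared Euclidean distance: abs2 caches |v|^2/2,
    so abs2 + other->abs2 - a.b = (|a|^2 + |b|^2 - 2 a.b)/2 = |a-b|^2/2.
    The factor of two doesn't change the ordering of candidates, so it
    is irrelevant for nearest-neighbor ranking.
  */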
  float distance_to(const FVector *other, size_t vec_len) const
  {
    return abs2 + other->abs2 - scale * other->scale *
           dot_product(dims, other->dims, vec_len);
  }
};
#pragma pack(pop)

/*
  An array of pointers to graph nodes

  It's mainly used to store all neighbors of a given node on a given layer.

  The array is of fixed size: 2*M for the zero layer, M for other layers,
  see MHNSW_Context::max_neighbors().

  The number of neighbors is zero-padded to a multiple of 8 (for the
  SIMD Bloom filter).

  Also used as a simple array of nodes in search_layer, where the array
  size is defined by ef or efConstruction.
*/
struct Neighborhood: public Sql_alloc
{
  FVectorNode **links;
  size_t num;
  FVectorNode **init(FVectorNode **ptr, size_t n)
  {
    num= 0;
    links= ptr;
    n= MY_ALIGN(n, 8);
    bzero(ptr, n*sizeof(*ptr));
    return ptr + n;
  }
};

/*
  One node in the graph = one row in the graph table

  Stores the vector itself, a ref (= position) in the graph (= hlindex)
  table, a ref in the main table, and an array of Neighborhood's, one
  per layer.

  It's lazily initialized: a node may know only its gref, everything
  else is loaded on demand.

  On the other hand, on INSERT the new node knows everything except
  the gref - which only becomes known after ha_write_row.

  Allocated on the memroot in two chunks. One is the same size for all
  nodes and stores the FVectorNode object, gref, tref, and vector. The
  second stores the neighbors, all Neighborhood's together; its size
  depends on the number of layers this node is on.

  There can be millions of nodes in the cache and the cache size is
  constrained by mhnsw_cache_size, so every byte matters here.
*/
#pragma pack(push, 1)
class FVectorNode
{
private:
  MHNSW_Context *ctx;

  const FVector *make_vec(const void *v);
  int alloc_neighborhood(uint8_t layer);

public:
  const FVector *vec= nullptr;
  Neighborhood *neighbors= nullptr;
  uint8_t max_layer;
  bool stored:1, deleted:1;

  FVectorNode(MHNSW_Context *ctx_, const void *gref_);
  FVectorNode(MHNSW_Context *ctx_, const void *tref_, uint8_t layer,
              const void *vec_);
  float distance_to(const FVector *other) const;
  int load(TABLE *graph);
  int load_from_record(TABLE *graph);
  int save(TABLE *graph);
  size_t tref_len() const;
  size_t gref_len() const;
  uchar *gref() const;
  uchar *tref() const;
  void push_neighbor(size_t layer, FVectorNode *v);

  static uchar *get_key(const FVectorNode *elem, size_t *key_len, my_bool);
};
#pragma pack(pop)

/*
  Shared algorithm context. The graph.

  Stored in the TABLE_SHARE and on TABLE_SHARE::mem_root.
  Stores the complete graph in MHNSW_Context::root;
  the mapping gref->FVectorNode is in the node_cache.
  Both root and node_cache are protected by cache_lock, but it's only
  needed when loading nodes and is not used when the whole graph is in
  memory.
  The graph can be traversed concurrently by different threads, as
  traversal changes neither the nodes nor the ctx.
  Nodes can be loaded concurrently by different threads; this is
  protected by a partitioned node_lock.
  The reference counter allows flushing the graph without interrupting
  concurrent searches.
  MyISAM automatically gets exclusive write access because of TL_WRITE,
  but InnoDB has to use a dedicated ctx->commit_lock for that.
*/
class MHNSW_Context : public Sql_alloc
{
  std::atomic<uint> refcnt{0};
  mysql_mutex_t cache_lock;
  mysql_mutex_t node_lock[8];

  void cache_internal(FVectorNode *node)
  {
    DBUG_ASSERT(node->stored);
    node_cache.insert(node);
  }
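
  /*
    Allocates the fixed-size chunk of a node in one piece: the
    FVectorNode object itself followed by room for the gref, the tref,
    and the aligned FVector (cf. the "two chunks" note in the
    FVectorNode comment above).
  */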
  void *alloc_node_internal()
  {
    return alloc_root(&root, sizeof(FVectorNode) + gref_len + tref_len
                             + FVector::alloc_size(vec_len));
  }

protected:
  MEM_ROOT root;
  Hash_set<FVectorNode> node_cache{PSI_INSTRUMENT_MEM, FVectorNode::get_key};

public:
  mysql_rwlock_t commit_lock;
  size_t vec_len= 0;
  size_t byte_len= 0;
  Atomic_relaxed<double> ef_power{0.6}; // for the bloom filter size heuristic
  FVectorNode *start= 0;
  const uint tref_len;
  const uint gref_len;
  const uint M;

  MHNSW_Context(TABLE *t)
    : tref_len(t->file->ref_length),
      gref_len(t->hlindex->file->ref_length),
      M(t->in_use->variables.mhnsw_max_edges_per_node)
  {
    mysql_rwlock_init(PSI_INSTRUMENT_ME, &commit_lock);
    mysql_mutex_init(PSI_INSTRUMENT_ME, &cache_lock, MY_MUTEX_INIT_FAST);
    for (uint i=0; i < array_elements(node_lock); i++)
      mysql_mutex_init(PSI_INSTRUMENT_ME, node_lock + i, MY_MUTEX_INIT_SLOW);
    init_alloc_root(PSI_INSTRUMENT_MEM, &root, 1024*1024, 0, MYF(0));
  }

  virtual ~MHNSW_Context()
  {
    free_root(&root, MYF(0));
    mysql_rwlock_destroy(&commit_lock);
    mysql_mutex_destroy(&cache_lock);
    for (size_t i=0; i < array_elements(node_lock); i++)
      mysql_mutex_destroy(node_lock + i);
  }
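
  /*
    Partitioned node locking: the node's address is hashed to pick one
    of the eight node_lock mutexes, so up to eight threads can load
    different nodes into the cache concurrently. lock_node() returns a
    ticket that unlock_node() uses to release the same mutex.
  */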
  uint lock_node(FVectorNode *ptr)
  {
    ulong nr1= 1, nr2= 4;
    my_hash_sort_bin(0, (uchar*)&ptr, sizeof(ptr), &nr1, &nr2);
    uint ticket= nr1 % array_elements(node_lock);
    mysql_mutex_lock(node_lock + ticket);
    return ticket;
  }

  void unlock_node(uint ticket)
  {
    mysql_mutex_unlock(node_lock + ticket);
  }

  uint max_neighbors(size_t layer) const
  {
    return (layer ? 1 : 2) * M;               // heuristic from the paper
  }

  void set_lengths(size_t len)
  {
    byte_len= len;
    vec_len= len / sizeof(float);
  }
|
mhnsw: inter-statement shared cache
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes
* nodes are stored in an array, not List, to better utilize bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes
(to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set
are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache,
nodes are partitioned into 8 mutexes (8 is chosen arbitrarily, might
need tuning)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the
graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an
INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list in
in thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes
from the secondary cache are invalidated in the shared cache
while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved
in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed
when it reaches the threshold
2024-07-17 17:16:28 +02:00
|
|
|
|
|
|
|
static int acquire(MHNSW_Context **ctx, TABLE *table, bool for_update);
|
|
|
|
static MHNSW_Context *get_from_share(TABLE_SHARE *share, TABLE *table);
|
|
|
|
|
2024-07-18 14:43:47 +02:00
|
|
|

  virtual void reset(TABLE_SHARE *share)
  {
    mysql_mutex_lock(&share->LOCK_share);
    if (static_cast<MHNSW_Context*>(share->hlindex->hlindex_data) == this)
    {
      share->hlindex->hlindex_data= nullptr;
      --refcnt;
    }
    mysql_mutex_unlock(&share->LOCK_share);
  }

  void release(TABLE *table)
  {
    return release(table->file->has_transactions(), table->s);
  }

  virtual void release(bool can_commit, TABLE_SHARE *share)
  {
    if (can_commit)
      mysql_rwlock_unlock(&commit_lock);
    if (root_size(&root) > mhnsw_cache_size)
      reset(share);
    if (--refcnt == 0)
      this->~MHNSW_Context(); // XXX reuse
  }

  FVectorNode *get_node(const void *gref)
  {
    mysql_mutex_lock(&cache_lock);
    FVectorNode *node= node_cache.find(gref, gref_len);
    if (!node)
    {
      node= new (alloc_node_internal()) FVectorNode(this, gref);
      cache_internal(node);
    }
    mysql_mutex_unlock(&cache_lock);
    return node;
  }

  /* used on INSERT, gref isn't known, so cannot cache the node yet */
  void *alloc_node()
  {
    mysql_mutex_lock(&cache_lock);
    auto p= alloc_node_internal();
    mysql_mutex_unlock(&cache_lock);
    return p;
  }

  /* explicitly cache the node after alloc_node() */
  void cache_node(FVectorNode *node)
  {
    mysql_mutex_lock(&cache_lock);
    cache_internal(node);
    mysql_mutex_unlock(&cache_lock);
  }

  /* find the node without creating, only used on merging trx->ctx */
  FVectorNode *find_node(const void *gref)
  {
    mysql_mutex_lock(&cache_lock);
    FVectorNode *node= node_cache.find(gref, gref_len);
    mysql_mutex_unlock(&cache_lock);
    return node;
  }

  void *alloc_neighborhood(size_t max_layer)
  {
    mysql_mutex_lock(&cache_lock);
    auto p= alloc_root(&root, sizeof(Neighborhood)*(max_layer+1) +
                       sizeof(FVectorNode*)*(MY_ALIGN(M, 4)*2 + MY_ALIGN(M, 8)*max_layer));
    mysql_mutex_unlock(&cache_lock);
    return p;
  }
};
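
The bullets above also mention a SIMD-optimized bloom filter for detecting visited nodes. Below is a plain scalar sketch of the concept, with invented names and two probes derived from one hash instead of the real hashing, and no SIMD. It only illustrates why a bloom filter fits visited-node tracking during a graph walk: "definitely not visited yet" answers are exact (no false negatives), and the rare "maybe visited" answers can then be confirmed exactly (the bullets above keep visited nodes in an array, which makes such a confirmation scan cheap).

// Concept sketch of a "visited nodes" Bloom filter, not the real code.
struct VisitedFilter
{
  static constexpr size_t NBITS= 1 << 16; // really sized from the estimate above
  uint64_t bits[NBITS / 64]= {};
  static size_t slot(uintptr_t h) { return (h % NBITS) / 64; }
  static uint64_t mask(uintptr_t h) { return 1ULL << (h % 64); }
  void insert(uintptr_t h)
  {
    bits[slot(h)]|= mask(h);                            // probe 1
    bits[slot(h * 0x9e3779b9)]|= mask(h * 0x9e3779b9);  // probe 2
  }
  bool maybe_visited(uintptr_t h) const
  {
    return (bits[slot(h)] & mask(h)) &&
           (bits[slot(h * 0x9e3779b9)] & mask(h * 0x9e3779b9));
  }
};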

/*
  This is a non-shared context that exists within one transaction.

  At the end of the transaction it's either discarded (on rollback)
  or merged into the shared ctx (on commit).

  trx's are stored in thd->ha_data[] in a single-linked list,
  one instance of trx per TABLE_SHARE and allocated on the
  thd->transaction->mem_root
*/
class MHNSW_Trx : public MHNSW_Context
{
public:
  TABLE_SHARE *table_share;
  bool list_of_nodes_is_lost= false;
  MHNSW_Trx *next= nullptr;

  MHNSW_Trx(TABLE *table) : MHNSW_Context(table), table_share(table->s) {}

  void reset(TABLE_SHARE *) override
  {
    node_cache.clear();
    free_root(&root, MYF(0));
    start= 0;
    list_of_nodes_is_lost= true;
  }

  void release(bool, TABLE_SHARE *) override
  {
    if (root_size(&root) > mhnsw_cache_size)
      reset(nullptr);
  }

  static MHNSW_Trx *get_from_thd(THD *thd, TABLE *table);

  // it's okay in a transaction-local cache, there's no concurrent access
  Hash_set<FVectorNode> &get_cache() { return node_cache; }

  /* fake handlerton to use thd->ha_data and to get notified of commits */
  static struct MHNSW_hton : public handlerton
  {
    MHNSW_hton()
    {
      db_type= DB_TYPE_HLINDEX_HELPER;
      flags= HTON_NOT_USER_SELECTABLE | HTON_HIDDEN;
      savepoint_offset= 0;
      savepoint_set= [](handlerton *, THD *, void *){ return 0; };
      savepoint_rollback_can_release_mdl= [](handlerton *, THD *){ return true; };
      savepoint_rollback= do_savepoint_rollback;
      commit= do_commit;
      rollback= do_rollback;
    }
    static int do_commit(handlerton *, THD *thd, bool);
    static int do_rollback(handlerton *, THD *thd, bool);
    static int do_savepoint_rollback(handlerton *, THD *thd, void *);
  } hton;
};

MHNSW_Trx::MHNSW_hton MHNSW_Trx::hton;

int MHNSW_Trx::MHNSW_hton::do_savepoint_rollback(handlerton *, THD *thd, void *)
{
  for (auto trx= static_cast<MHNSW_Trx*>(thd_get_ha_data(thd, &hton));
       trx; trx= trx->next)
    trx->reset(nullptr);
  return 0;
}

int MHNSW_Trx::MHNSW_hton::do_rollback(handlerton *, THD *thd, bool)
{
  MHNSW_Trx *trx_next;
  for (auto trx= static_cast<MHNSW_Trx*>(thd_get_ha_data(thd, &hton));
       trx; trx= trx_next)
  {
    trx_next= trx->next;
    trx->~MHNSW_Trx();
  }
  thd_set_ha_data(current_thd, &hton, nullptr);
  return 0;
}

int MHNSW_Trx::MHNSW_hton::do_commit(handlerton *, THD *thd, bool)
{
  MHNSW_Trx *trx_next;
  for (auto trx= static_cast<MHNSW_Trx*>(thd_get_ha_data(thd, &hton));
       trx; trx= trx_next)
  {
    trx_next= trx->next;
    auto ctx= MHNSW_Context::get_from_share(trx->table_share, nullptr);
    if (ctx)
    {
      mysql_rwlock_wrlock(&ctx->commit_lock);
      if (trx->list_of_nodes_is_lost)
        ctx->reset(trx->table_share);
      else
      {
        // consider copying nodes from trx to shared cache when it makes sense
        // for ann_benchmarks it does not
        // also, consider flushing only changed nodes (a flag in the node)
        for (FVectorNode &from : trx->get_cache())
          if (FVectorNode *node= ctx->find_node(from.gref()))
            node->vec= nullptr;
        ctx->start= nullptr;
      }
      ctx->release(true, trx->table_share);
    }
    trx->~MHNSW_Trx();
  }
  thd_set_ha_data(current_thd, &hton, nullptr);
  return 0;
}
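
Note the invalidation strategy in do_commit() above: nodes are not copied from the transaction-local cache into the shared one. Instead, every node the transaction touched has its cached vector dropped (node->vec= nullptr) and the shared entry point is cleared (ctx->start= nullptr), all under an exclusively held commit_lock, so the next search lazily re-reads fresh data from the graph table.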

MHNSW_Trx *MHNSW_Trx::get_from_thd(THD *thd, TABLE *table)
{
  auto trx= static_cast<MHNSW_Trx*>(thd_get_ha_data(thd, &hton));
  while (trx && trx->table_share != table->s) trx= trx->next;
  if (!trx)
  {
    trx= new (&thd->transaction->mem_root) MHNSW_Trx(table);
    trx->next= static_cast<MHNSW_Trx*>(thd_get_ha_data(thd, &hton));
    thd_set_ha_data(thd, &hton, trx);
    if (!trx->next)
    {
      bool all= thd_test_options(thd, OPTION_NOT_AUTOCOMMIT | OPTION_BEGIN);
      trans_register_ha(thd, all, &hton, 0);
    }
  }
  return trx;
}
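
A detail worth noting in get_from_thd() above: trans_register_ha() is called only when the new trx is the first in the thread's list (trx->next is null exactly then), so the fake handlerton is registered once per transaction, and its do_commit()/do_rollback() callbacks then walk the entire list of per-TABLE_SHARE caches.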

MHNSW_Context *MHNSW_Context::get_from_share(TABLE_SHARE *share, TABLE *table)
{
  mysql_mutex_lock(&share->LOCK_share);
  auto ctx= static_cast<MHNSW_Context*>(share->hlindex->hlindex_data);
  if (!ctx && table)
  {
    ctx= new (&share->hlindex->mem_root) MHNSW_Context(table);
    if (!ctx)
    {
      mysql_mutex_unlock(&share->LOCK_share); // don't leak the mutex on OOM
      return nullptr;
    }
    share->hlindex->hlindex_data= ctx;
    ctx->refcnt++; // the pointer cached in the share counts as a reference
  }
  if (ctx)
    ctx->refcnt++; // the caller's reference
  mysql_mutex_unlock(&share->LOCK_share);
  return ctx;
}

int MHNSW_Context::acquire(MHNSW_Context **ctx, TABLE *table, bool for_update)
{
  TABLE *graph= table->hlindex;
  THD *thd= table->in_use;

  if (table->file->has_transactions() &&
      (for_update || thd_get_ha_data(thd, &MHNSW_Trx::hton)))
    *ctx= MHNSW_Trx::get_from_thd(thd, table);
  else
  {
    *ctx= MHNSW_Context::get_from_share(table->s, table);
    if (table->file->has_transactions())
      mysql_rwlock_rdlock(&(*ctx)->commit_lock);
  }

  if ((*ctx)->start)
    return 0;

  if (int err= graph->file->ha_index_init(IDX_LAYER, 1))
    return err;

  int err= graph->file->ha_index_last(graph->record[0]);
  graph->file->ha_index_end();
  if (err)
    return err;

  graph->file->position(graph->record[0]);
  (*ctx)->set_lengths(FVector::data_to_value_size(graph->field[FIELD_VEC]->value_length()));
  (*ctx)->start= (*ctx)->get_node(graph->file->ref);
  return (*ctx)->start->load_from_record(graph);
}
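
For orientation, the expected call pattern around acquire() looks roughly like this. A sketch under stated assumptions (an open TABLE with a vector index, error handling abbreviated), not a verbatim excerpt of the callers:

// Sketch only: obtaining and returning a context around a search.
MHNSW_Context *ctx;
if (int err= MHNSW_Context::acquire(&ctx, table, /*for_update=*/ false))
  return err;                  // e.g. the graph table is still empty
/* ... walk the graph: ctx->start, ctx->get_node(), node->load(graph) ... */
ctx->release(table);           // for the shared ctx: drops commit_lock and refcnt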

/* copy the vector, preprocessed as needed */
const FVector *FVectorNode::make_vec(const void *v)
{
  return FVector::create(tref() + tref_len(), v, ctx->byte_len);
}

FVectorNode::FVectorNode(MHNSW_Context *ctx_, const void *gref_)
  : ctx(ctx_), stored(true), deleted(false)
{
  memcpy(gref(), gref_, gref_len());
}

FVectorNode::FVectorNode(MHNSW_Context *ctx_, const void *tref_, uint8_t layer,
                         const void *vec_)
  : ctx(ctx_), stored(false), deleted(false)
{
  DBUG_ASSERT(tref_);
  memset(gref(), 0xff, gref_len()); // important: larger than any real gref
  memcpy(tref(), tref_, tref_len());
  vec= make_vec(vec_);

  alloc_neighborhood(layer);
}

float FVectorNode::distance_to(const FVector *other) const
{
  return vec->distance_to(other, ctx->vec_len);
}

int FVectorNode::alloc_neighborhood(uint8_t layer)
{
  if (neighbors)
    return 0;
  max_layer= layer;
  neighbors= (Neighborhood*)ctx->alloc_neighborhood(layer);
  auto ptr= (FVectorNode**)(neighbors + (layer+1));
  for (size_t i= 0; i <= layer; i++)
    ptr= neighbors[i].init(ptr, ctx->max_neighbors(i));
  return 0;
}
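
As a worked example of the sizing above (numbers are illustrative): with M=16 and layer=2, ctx->alloc_neighborhood() reserves three Neighborhood headers plus pointer slots for MY_ALIGN(16,4)*2 = 32 layer-0 neighbors and MY_ALIGN(16,8) = 16 neighbors for each of the two upper layers, matching max_neighbors(): 2*M on the bottom layer, M above it.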

int FVectorNode::load(TABLE *graph)
{
  if (likely(vec))
    return 0;

  DBUG_ASSERT(stored);
  // trx: consider loading nodes from shared, when it makes sense
  // for ann_benchmarks it does not
  if (int err= graph->file->ha_rnd_pos(graph->record[0], gref()))
    return err;
  return load_from_record(graph);
}
|
|
|
|
|
mhnsw: inter-statement shared cache
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes
* nodes are stored in an array, not List, to better utilize bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes
(to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set
are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache,
nodes are partitioned into 8 mutexes (8 is chosen arbitrarily, might
need tuning)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the
graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an
INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list in
in thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes
from the secondary cache are invalidated in the shared cache
while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved
in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed
when it reaches the threshold
2024-07-17 17:16:28 +02:00
|
|
|
int FVectorNode::load_from_record(TABLE *graph)
|
2024-06-04 23:06:44 +02:00
|
|
|
{
|
mhnsw: inter-statement shared cache
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes
* nodes are stored in an array, not List, to better utilize bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes
(to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set
are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache,
nodes are partitioned into 8 mutexes (8 is chosen arbitrarily, might
need tuning)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the
graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an
INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list in
in thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes
from the secondary cache are invalidated in the shared cache
while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved
in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed
when it reaches the threshold
2024-07-17 17:16:28 +02:00
|
|
|
DBUG_ASSERT(ctx->byte_len);
|
|
|
|
|
|
|
|
uint ticket= ctx->lock_node(this);
|
|
|
|
SCOPE_EXIT([this, ticket](){ ctx->unlock_node(ticket); });
|
|
|
|
|
|
|
|
if (vec)
|
|
|
|
return 0;

  String buf, *v= graph->field[FIELD_TREF]->val_str(&buf);
  deleted= graph->field[FIELD_TREF]->is_null();
  if (!deleted)
  {
    if (unlikely(v->length() != tref_len()))
      return my_errno= HA_ERR_CRASHED;
    memcpy(tref(), v->ptr(), v->length());
  }

  v= graph->field[FIELD_VEC]->val_str(&buf);
  if (unlikely(!v))
    return my_errno= HA_ERR_CRASHED;

  if (v->length() != FVector::data_size(ctx->vec_len))
    return my_errno= HA_ERR_CRASHED;

  FVector *vec_ptr= FVector::align_ptr(tref() + tref_len());
  memcpy(vec_ptr->data(), v->ptr(), v->length());
  vec_ptr->postprocess(ctx->vec_len);

  longlong layer= graph->field[FIELD_LAYER]->val_int();
  if (layer > 100) // 10e30 nodes at M=2, more at larger M's
    return my_errno= HA_ERR_CRASHED;

  if (int err= alloc_neighborhood(static_cast<uint8_t>(layer)))
    return err;

  v= graph->field[FIELD_NEIGHBORS]->val_str(&buf);
  if (unlikely(!v))
    return my_errno= HA_ERR_CRASHED;

  // <N> <gref> <gref> ... <N> ...etc...
  uchar *ptr= (uchar*)v->ptr(), *end= ptr + v->length();
  for (size_t i=0; i <= max_layer; i++)
  {
    if (unlikely(ptr >= end))
      return my_errno= HA_ERR_CRASHED;
    size_t grefs= *ptr++;
    if (unlikely(ptr + grefs * gref_len() > end))
      return my_errno= HA_ERR_CRASHED;
    neighbors[i].num= grefs;
    for (size_t j=0; j < grefs; j++, ptr+= gref_len())
      neighbors[i].links[j]= ctx->get_node(ptr);
  }

  vec= vec_ptr; // must be done at the very end
  return 0;
}
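
/*
  Worked example of the FIELD_NEIGHBORS blob that load_from_record() parses
  above (sizes chosen purely for illustration; gref_len() is fixed per
  table). With max_layer == 1 and gref_len() == 2 the blob could be:

    03 a1a2 b1b2 c1c2 01 d1d2
    ^^ layer 0: 3 grefs       ^^ layer 1: 1 gref

  i.e. one count byte per layer, immediately followed by that many
  fixed-size graph references, for layers 0 through max_layer inclusive.
*/
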
void FVectorNode::push_neighbor(size_t layer, FVectorNode *other)
{
  DBUG_ASSERT(neighbors[layer].num < ctx->max_neighbors(layer));
  neighbors[layer].links[neighbors[layer].num++]= other;
}
size_t FVectorNode::tref_len() const { return ctx->tref_len; }
size_t FVectorNode::gref_len() const { return ctx->gref_len; }
uchar *FVectorNode::gref() const { return (uchar*)(this+1); }
uchar *FVectorNode::tref() const { return gref() + gref_len(); }
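
/*
  The accessors above encode the single-allocation node layout, matching how
  load_from_record() places the vector right after the table reference:

    [FVectorNode][gref bytes][tref bytes][padding][FVector]
     ^this        ^gref()     ^tref()             ^FVector::align_ptr(tref() + tref_len())
*/
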
uchar *FVectorNode::get_key(const FVectorNode *elem, size_t *key_len, my_bool)
{
  *key_len= elem->gref_len();
  return elem->gref();
}
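
/*
  get_key() above is the key-extraction callback: cached nodes are hashed by
  their graph reference, which is how ctx->get_node(gref), as used by
  load_from_record(), can return an already-loaded node. A sketch under the
  assumption that the registry is the mutex-protected Hash_set mentioned in
  the design notes (hypothetical signature, the real definition is elsewhere
  in this file):

    // hypothetical:
    // FVectorNode *MHNSW_Context::get_node(const uchar *gref)
    // {
    //   return node_cache.find(gref, gref_len); // keyed via FVectorNode::get_key
    //   // (creation of a new node on a cache miss omitted)
    // }
*/
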
/* one visited node during the search. caches the distance to target */
struct Visited : public Sql_alloc
{
  FVectorNode *node;
  const float distance_to_target;
  Visited(FVectorNode *n, float d) : node(n), distance_to_target(d) {}
  static int cmp(void *, const Visited* a, const Visited *b)
  {
    return a->distance_to_target < b->distance_to_target ? -1 :
           a->distance_to_target > b->distance_to_target ? 1 : 0;
  }
};
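
/*
  Visited::cmp above orders elements by their cached distance, so a Queue
  initialized with it (as in select_neighbors() below) can always pop the
  candidate closest to, or, depending on the queue's max_at_top flag,
  farthest from the target without recomputing any distances.
*/
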
/*
  a factory to create Visited and keep track of already seen nodes

  note that PatternedSimdBloomFilter works in blocks of 8 elements,
  so on insert they're accumulated in nodes[], on search the caller
  provides 8 addresses at once. we record 0x0 as "seen" so that
  the caller could pad the input with nullptr's
*/
class VisitedSet
{
  MEM_ROOT *root;
  const FVector *target;
  PatternedSimdBloomFilter<FVectorNode> map;
  const FVectorNode *nodes[8]= {0,0,0,0,0,0,0,0};
  size_t idx= 1; // to record 0 in the filter
  public:
  uint count= 0;
  VisitedSet(MEM_ROOT *root, const FVector *target, uint size) :
    root(root), target(target), map(size, 0.01f) {}
  Visited *create(FVectorNode *node)
  {
    auto *v= new (root) Visited(node, node->distance_to(target));
    insert(node);
    count++;
    return v;
  }
  void insert(const FVectorNode *n)
  {
    nodes[idx++]= n;
    if (idx == 8) flush();
  }
  void flush() {
    if (idx) map.Insert(nodes);
    idx=0;
  }
  uint8_t seen(FVectorNode **nodes) { return map.Query(nodes); }
};
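
/*
  Usage sketch of the batched API above (illustrative only; it assumes
  PatternedSimdBloomFilter::Query() returns a bitmask with bit i set when
  nodes[i] was possibly seen before):

    // FVectorNode *batch[8]= {n0, n1, n2, nullptr, nullptr,
    //                         nullptr, nullptr, nullptr};
    // uint8_t mask= visited.seen(batch);
    // for (int i= 0; i < 8; i++)
    //   if (batch[i] && !(mask & (1 << i)))
    //     ; // expand batch[i], it hasn't been visited yet

  padding with nullptr is safe because the constructor starts idx at 1,
  leaving nodes[0] == 0, so 0x0 is recorded as "seen" on the first flush().
*/
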
/*
  selects best neighbors from the list of candidates plus one extra candidate

  one extra candidate is specified separately to avoid appending it to
  the Neighborhood candidates, which might already be at its max size.
*/
static int select_neighbors(MHNSW_Context *ctx, TABLE *graph, size_t layer,
                            FVectorNode &target, const Neighborhood &candidates,
                            FVectorNode *extra_candidate,
                            size_t max_neighbor_connections)
{
  Queue<Visited> pq; // working queue
  if (pq.init(10000, false, Visited::cmp))
    return my_errno= HA_ERR_OUT_OF_MEM;

  MEM_ROOT * const root= graph->in_use->mem_root;
  auto discarded= (Visited**)my_safe_alloca(sizeof(Visited**)*max_neighbor_connections);
  size_t discarded_num= 0;
  Neighborhood &neighbors= target.neighbors[layer];
  for (size_t i=0; i < candidates.num; i++)
  {
    FVectorNode *node= candidates.links[i];
    if (int err= node->load(graph))
      return err;
    pq.push(new (root) Visited(node, node->distance_to(target.vec)));
  }
  if (extra_candidate)
    pq.push(new (root) Visited(extra_candidate, extra_candidate->distance_to(target.vec)));
  DBUG_ASSERT(pq.elements());

  neighbors.num= 0;
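  /*
    Keep-or-discard pass over the candidates, nearest first: a candidate
    is kept as a neighbor only if no already-kept neighbor is closer to
    it than its own distance to the target divided by alpha; rejected
    candidates are parked in discarded[] so the neighbor list can be
    padded up to max_neighbor_connections afterwards.
  */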
  while (pq.elements() && neighbors.num < max_neighbor_connections)
  {
    Visited *vec= pq.pop();
    FVectorNode * const node= vec->node;
    const float target_dista= vec->distance_to_target / alpha;
    bool discard= false;
    for (size_t i=0; i < neighbors.num; i++)
      if ((discard= node->distance_to(neighbors.links[i]->vec) < target_dista))
        break;
    if (!discard)
      target.push_neighbor(layer, node);
    else if (discarded_num + neighbors.num < max_neighbor_connections)
      discarded[discarded_num++]= vec;
  }
  // pad the neighbor list from the discarded set while there is room left
  for (size_t i=0; i < discarded_num && neighbors.num < max_neighbor_connections; i++)
    target.push_neighbor(layer, discarded[i]->node);
  my_safe_afree(discarded, sizeof(Visited**)*max_neighbor_connections);
  return 0;
}
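FVectorNode::save() below packs all per-layer neighbor lists into a single
blob: for each layer 0..max_layer, one count byte followed by that many
fixed-size graph references (gref_len() bytes each). A decoding sketch of
that layout, under simplified assumptions (plain byte buffer, fixed gref
length; a hypothetical helper, not the server's reader):

#include <cstdint>
#include <vector>

// Parse "count byte + count refs" per layer, for layers 0..max_layer.
static std::vector<std::vector<std::vector<uint8_t>>>
decode_neighbors(const uint8_t *blob, size_t max_layer, size_t gref_len)
{
  std::vector<std::vector<std::vector<uint8_t>>> layers(max_layer + 1);
  const uint8_t *ptr= blob;
  for (size_t i= 0; i <= max_layer; i++)
  {
    size_t num= *ptr++;                     // one count byte per layer
    for (size_t j= 0; j < num; j++, ptr+= gref_len)
      layers[i].emplace_back(ptr, ptr + gref_len);
  }
  return layers;
}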
int FVectorNode::save(TABLE *graph)
{
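  /*
    One row in the graph table per node: FIELD_LAYER is the node's top
    layer, FIELD_TREF is the ref into the base table (set to NULL when
    the node is deleted), FIELD_VEC is the vector, and FIELD_NEIGHBORS
    is the packed per-layer neighbor list built below.
  */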
  DBUG_ASSERT(vec);
  DBUG_ASSERT(neighbors);

  restore_record(graph, s->default_values);
  graph->field[FIELD_LAYER]->store(max_layer, false);
  if (deleted)
    graph->field[FIELD_TREF]->set_null();
  else
  {
    graph->field[FIELD_TREF]->set_notnull();
    graph->field[FIELD_TREF]->store_binary(tref(), tref_len());
  }
  graph->field[FIELD_VEC]->store_binary(vec->data(), FVector::data_size(ctx->vec_len));
  size_t total_size= 0;
  for (size_t i=0; i <= max_layer; i++)
    total_size+= 1 + gref_len() * neighbors[i].num;
  uchar *neighbor_blob= static_cast<uchar *>(my_safe_alloca(total_size));
  uchar *ptr= neighbor_blob;
  for (size_t i= 0; i <= max_layer; i++)
  {
    // per layer: one count byte, then neighbors[i].num graph refs
    *ptr++= (uchar)(neighbors[i].num);
    for (size_t j= 0; j < neighbors[i].num; j++, ptr+= gref_len())
      memcpy(ptr, neighbors[i].links[j]->gref(), gref_len());
  }
  graph->field[FIELD_NEIGHBORS]->store_binary(neighbor_blob, total_size);
  int err;
  if (stored)
  {
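    /*
      The node already has a row in the graph table: re-position the
      handler on it by ref and update it in place; "record is the same"
      is not an error here.
    */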
    if (!(err= graph->file->ha_rnd_pos(graph->record[1], gref())))
    {
      err= graph->file->ha_update_row(graph->record[1], graph->record[0]);
      if (err == HA_ERR_RECORD_IS_THE_SAME)
        err= 0;
    }
  }
  else
  {
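    // first save of this node: insert a new row, remember its position
    // as the node's gref, and register the node in the shared cache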
    err= graph->file->ha_write_row(graph->record[0]);
    graph->file->position(graph->record[0]);
    memcpy(gref(), graph->file->ref, gref_len());
    stored= true;
    ctx->cache_node(this);
  }
  my_safe_afree(neighbor_blob, total_size);
  return err;
}
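
/*
  Adds reciprocal links after a node was connected to its neighbors on
  the given layer: every neighbor of node gets a link back to it. A
  neighbor that is already at max_neighbors has its link list
  re-selected via select_neighbors() instead. Each touched neighbor is
  then saved back to the graph table.
*/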
static int update_second_degree_neighbors(MHNSW_Context *ctx, TABLE *graph,
                                          size_t layer, FVectorNode *node)
{
  const uint max_neighbors= ctx->max_neighbors(layer);
  // it seems that one could update nodes in the gref order
  // to avoid InnoDB deadlocks, but it produces no noticeable effect
  for (size_t i=0; i < node->neighbors[layer].num; i++)
  {
    FVectorNode *neigh= node->neighbors[layer].links[i];
    Neighborhood &neighneighbors= neigh->neighbors[layer];
    if (neighneighbors.num < max_neighbors)
      neigh->push_neighbor(layer, node);
    else
      if (int err= select_neighbors(ctx, graph, layer, *neigh, neighneighbors,
                                    node, max_neighbors))
        return err;
    if (int err= neigh->save(graph))
      return err;
  }
  return 0;
}
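
/*
  Greedy best-first search for the nodes closest to target on one layer
  of the graph.

  candidates holds the frontier, best the (at most ef) closest nodes
  found so far. The loop below keeps expanding the closest candidate
  until the closest remaining candidate is already further away than
  the worst node in a full best heap; the best nodes found are returned
  via *result.
*/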
static int search_layer(MHNSW_Context *ctx, TABLE *graph, const FVector *target,
                        Neighborhood *start_nodes, uint result_size,
                        size_t layer, Neighborhood *result, bool construction)
{
  DBUG_ASSERT(start_nodes->num > 0);
  result->num= 0;

  MEM_ROOT * const root= graph->in_use->mem_root;
  Queue<Visited> candidates, best;
  bool skip_deleted;
  uint ef= result_size;
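
  // ef is the search "beam width": during construction it is raised to
  // ef_construction, during a search layer 0 uses at least
  // @@mhnsw_min_limit. Deleted nodes are skipped only on layer 0 of a
  // search, where the actual result rows are collected; elsewhere they
  // still help to navigate the graph.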
  if (construction)
  {
    skip_deleted= false;
    if (ef > 1)
      ef= std::max(ef_construction, ef);
  }
  else
  {
    skip_deleted= layer == 0;
    if (ef > 1 || layer == 0)
      ef= std::max(graph->in_use->variables.mhnsw_min_limit, ef);
  }
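
  // Size the VisitedSet bloom filter from an estimate of how many nodes
  // this search will visit; per the mhnsw shared-cache commit notes,
  // ctx->ef_power is auto-adjusted from earlier searches, so the
  // estimate adapts over time.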
  // WARNING! heuristic here
  const double est_heuristic= 8 * std::sqrt(ctx->max_neighbors(layer));
  const uint est_size= static_cast<uint>(est_heuristic * std::pow(ef, ctx->ef_power));
  VisitedSet visited(root, target, est_size);
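
  // candidates: nodes to expand, closest to target first; best: up to
  // ef closest nodes found so far, furthest on top (assuming the second
  // init argument is max_at_top) so the worst one can be popped when a
  // closer node is found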
  candidates.init(10000, false, Visited::cmp);
  best.init(ef, true, Visited::cmp);

  DBUG_ASSERT(start_nodes->num <= result_size);
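  // seed both heaps with the start nodes; deleted nodes may still be
  // expanded, they are only kept out of the results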
  for (size_t i=0; i < start_nodes->num; i++)
  {
    Visited *v= visited.create(start_nodes->links[i]);
    candidates.push(v);
    if (skip_deleted && v->node->deleted)
      continue;
    best.push(v);
  }

  float furthest_best= FLT_MAX;
  while (candidates.elements())
  {
    const Visited &cur= *candidates.pop();
    if (cur.distance_to_target > furthest_best && best.is_full())
      break; // All possible candidates are worse than what we have

    visited.flush();

    Neighborhood &neighbors= cur.node->neighbors[layer];
    FVectorNode **links= neighbors.links, **end= links + neighbors.num;
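    // probe the bloom filter eight links at a time; seen() returns a
    // bitmask, presumably one bit per link, so 0xff means all eight
    // were visited before and the whole batch can be skipped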
    for (; links < end; links+= 8)
    {
      uint8_t res= visited.seen(links);
      if (res == 0xff)
        continue;
for (size_t i= 0; i < 8; i++)
{
if (res & (1 << i))
continue;
if (int err= links[i]->load(graph))
return err;
Visited *v= visited.create(links[i]);
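    // Two cases below: while best is not yet full, every unvisited
    // neighbor is accepted; once full, a neighbor must beat the current
    // furthest element of best. Nodes always enter candidates so the
    // walk can pass through them, but deleted ones are kept out of best
    // when skip_deleted is set.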
if (!best.is_full())
{
candidates.push(v);
if (skip_deleted && v->node->deleted)
continue;
best.push(v);
furthest_best= best.top()->distance_to_target;
}
else if (v->distance_to_target < furthest_best)
{
candidates.push(v);
if (skip_deleted && v->node->deleted)
continue;
best.replace_top(v);
furthest_best= best.top()->distance_to_target;
}
}
}
}
if (ef > 1 && visited.count*2 > est_size)
{
double ef_power= std::log(visited.count*2/est_heuristic) / std::log(ef);
set_if_bigger(ctx->ef_power, ef_power); // not atomic, but it's ok
}
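  // The block above solves est_heuristic * ef^x = 2*visited.count for x
  // (assuming est_size was derived as est_heuristic * ef^ef_power):
  //   x = log(2*visited.count / est_heuristic) / log(ef)
  // i.e. the smallest exponent that would not have underestimated the
  // number of nodes this search actually visited.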
while (best.elements() > result_size)
best.pop();
result->num= best.elements();
for (FVectorNode **links= result->links + result->num; best.elements();)
*--links= best.pop()->node;
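  // best is a max-heap on distance_to_target, so pops come furthest-first;
  // filling result->links backwards leaves it ordered nearest-first.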
return 0;
}
static int bad_value_on_insert(Field *f)
{
my_error(ER_TRUNCATED_WRONG_VALUE_FOR_FIELD, MYF(0), "vector", "...",
f->table->s->db.str, f->table->s->table_name.str, f->field_name.str,
f->table->in_use->get_stmt_da()->current_row_for_warning());
return my_errno= HA_ERR_GENERIC;
}
int mhnsw_insert(TABLE *table, KEY *keyinfo)
{
THD *thd= table->in_use;
TABLE *graph= table->hlindex;
MY_BITMAP *old_map= dbug_tmp_use_all_columns(table, &table->read_set);
Field *vec_field= keyinfo->key_part->field;
String buf, *res= vec_field->val_str(&buf);
MHNSW_Context *ctx;
/* metadata are checked on open */
DBUG_ASSERT(graph);
DBUG_ASSERT(keyinfo->algorithm == HA_KEY_ALG_VECTOR);
DBUG_ASSERT(keyinfo->usable_key_parts == 1);
DBUG_ASSERT(vec_field->binary());
DBUG_ASSERT(vec_field->cmp_type() == STRING_RESULT);
DBUG_ASSERT(res); // ER_INDEX_CANNOT_HAVE_NULL
DBUG_ASSERT(table->file->ref_length <= graph->field[FIELD_TREF]->field_length);
// XXX returning an error here will rollback the insert in InnoDB
// but in MyISAM the row will stay inserted, making the index out of sync:
// invalid vector values are present in the table but cannot be found
// via an index. The easiest way to fix it is with a VECTOR(N) type
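  // (vector values are packed arrays of 4-byte floats, hence the
  // multiple-of-4 length check below)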
if (res->length() == 0 || res->length() % 4)
return bad_value_on_insert(vec_field);
table->file->position(table->record[0]);
int err= MHNSW_Context::acquire(&ctx, table, true);
SCOPE_EXIT([ctx, table](){ ctx->release(table); });
if (err)
{
if (err != HA_ERR_END_OF_FILE)
return err;
// First insert!
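    // the graph is empty: this node becomes the entry point (stored in
    // ctx->start) at layer 0 and anchors all subsequent searches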
ctx->set_lengths(res->length());
FVectorNode *target= new (ctx->alloc_node())
FVectorNode(ctx, table->file->ref, 0, res->ptr());
if (!((err= target->save(graph))))
ctx->start= target;
return err;
}
if (ctx->byte_len != res->length())
return bad_value_on_insert(vec_field);
MEM_ROOT_SAVEPOINT memroot_sv;
root_make_savepoint(thd->mem_root, &memroot_sv);
SCOPE_EXIT([memroot_sv](){ root_free_to_savepoint(&memroot_sv); });
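  // everything allocated on thd->mem_root below is released in one shot
  // when the insert finishes, via the savepoint taken above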
const size_t max_found= ctx->max_neighbors(0);
Neighborhood candidates, start_nodes;
candidates.init(thd->alloc<FVectorNode*>(max_found + 7), max_found);
start_nodes.init(thd->alloc<FVectorNode*>(max_found + 7), max_found);
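  // the +7 slack presumably lets the visited filter read links in groups
  // of 8 (cf. the uint8_t mask in search) without overrunning the array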
start_nodes.links[start_nodes.num++]= ctx->start;
const double NORMALIZATION_FACTOR= 1 / std::log(ctx->M);
|
|
|
|
double log= -std::log(my_rnd(&thd->rand)) * NORMALIZATION_FACTOR;
|
|
|
|
const uint8_t max_layer= start_nodes.links[0]->max_layer;
|
|
|
|
uint8_t target_layer= std::min<uint8_t>(static_cast<uint8_t>(std::floor(log)), max_layer + 1);
|
|
|
|
int cur_layer;
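
  // the new node is keyed by the base table row id (handler ref) and
  // created at the layer drawn above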
  FVectorNode *target= new (ctx->alloc_node())
    FVectorNode(ctx, table->file->ref, target_layer, res->ptr());

  if (int err= graph->file->ha_rnd_init(0))
    return err;
  SCOPE_EXIT([graph](){ graph->file->ha_rnd_end(); });

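  /*
    above target_layer the node is not linked in, so an ef=1 greedy
    descent is enough to carry a good entry point down; from
    target_layer to 0 the search widens to max_neighbors candidates
    and select_neighbors() picks the node's links on each layer
  */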
  for (cur_layer= max_layer; cur_layer > target_layer; cur_layer--)
  {
    if (int err= search_layer(ctx, graph, target->vec, &start_nodes, 1,
                              cur_layer, &candidates, false))
      return err;
    std::swap(start_nodes, candidates);
  }

  for (; cur_layer >= 0; cur_layer--)
  {
    uint max_neighbors= ctx->max_neighbors(cur_layer);
    if (int err= search_layer(ctx, graph, target->vec, &start_nodes,
                              max_neighbors, cur_layer, &candidates, true))
      return err;

    if (int err= select_neighbors(ctx, graph, cur_layer, *target, candidates,
                                  0, max_neighbors))
      return err;
    std::swap(start_nodes, candidates);
  }

  if (int err= target->save(graph))
    return err;

  if (target_layer > max_layer)
    ctx->start= target;

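  // make the new links bidirectional: on every layer the node lives
  // on, the neighbors selected above get a back-link to it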
  for (cur_layer= target_layer; cur_layer >= 0; cur_layer--)
  {
    if (int err= update_second_degree_neighbors(ctx, graph, cur_layer, target))
      return err;
  }

  dbug_tmp_restore_column_map(&table->read_set, old_map);

  return 0;
}

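Before the read path, a sketch of the visited-node detection from the
cache notes above: a plain scalar bloom filter with hypothetical names
and a fixed size, where the real filter is SIMD-optimized and sized
from the auto-adjusting estimate of visited nodes.

#include <cstdint>
#include <cstring>

// sketch only: search loops would call test_and_set() on each
// candidate and skip nodes that were (probably) seen already;
// false positives are possible, false negatives are not
struct VisitedFilter
{
  uint64_t bits[1024];
  VisitedFilter() { memset(bits, 0, sizeof bits); }

  bool test_and_set(uintptr_t node_key)
  {
    uint64_t h= node_key * 0x9e3779b97f4a7c15ULL;  // cheap mixing
    uint64_t m1= 1ULL << (h & 63), m2= 1ULL << ((h >> 6) & 63);
    uint64_t &w1= bits[(h >> 12) % 1024], &w2= bits[(h >> 22) % 1024];
    bool seen= (w1 & m1) && (w2 & m2);
    w1|= m1, w2|= m2;
    return seen;
  }
};
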
int mhnsw_read_first(TABLE *table, KEY *keyinfo, Item *dist, ulonglong limit)
{
  THD *thd= table->in_use;
  TABLE *graph= table->hlindex;
  Item_func_vec_distance *fun= (Item_func_vec_distance *)dist;
  String buf, *res= fun->get_const_arg()->val_str(&buf);
  MHNSW_Context *ctx;
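
  // get_const_arg() returns the constant side of VEC_DISTANCE, i.e.
  // the query vector; the other argument is the indexed column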
  if (int err= table->file->ha_rnd_init(0))
    return err;

  int err= MHNSW_Context::acquire(&ctx, table, false);
  SCOPE_EXIT([ctx, table](){ ctx->release(table); });
  if (err)
    return err;

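  // from here on, ctx->release() runs on every return path, dropping
  // the reference acquire() took on the shared graph cache (kept in
  // the TABLE_SHARE, see the cache notes above)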
  Neighborhood candidates, start_nodes;
  candidates.init(thd->alloc<FVectorNode*>(limit + 7), limit);
  start_nodes.init(thd->alloc<FVectorNode*>(limit + 7), limit);

  // one could put all max_layer nodes in start_nodes
  // but it has no effect on the recall or speed
  start_nodes.links[start_nodes.num++]= ctx->start;

  /*
    if the query vector is NULL or invalid, VEC_DISTANCE will return
    NULL, so the result is basically unsorted, we can return rows
    in any order. Let's use some hardcoded value here
  */
  if (!res || ctx->byte_len != res->length())
  {
    res= &buf;
    buf.alloc(ctx->byte_len);
    buf.length(ctx->byte_len);
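    // any valid vector will do; fill in the unit vector (1, 0, 0, ...)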
    for (size_t i= 0; i < ctx->vec_len; i++)
      ((float*)buf.ptr())[i]= i == 0;
  }

  const longlong max_layer= start_nodes.links[0]->max_layer;
  auto target= FVector::create(thd->alloc(FVector::alloc_size(ctx->vec_len)),
                               res->ptr(), res->length());

  if (int err= graph->file->ha_rnd_init(0))
    return err;
  SCOPE_EXIT([graph](){ graph->file->ha_rnd_end(); });

  for (size_t cur_layer= max_layer; cur_layer > 0; cur_layer--)
  {
    if (int err= search_layer(ctx, graph, target, &start_nodes, 1, cur_layer,
                              &candidates, false))
      return err;
    std::swap(start_nodes, candidates);
  }

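  // at layer 0, widen the search to collect the limit nearest rows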
  if (int err= search_layer(ctx, graph, target, &start_nodes,
                            static_cast<uint>(limit), 0, &candidates, false))
    return err;
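
  /*
    Materialize the result into one thd-allocated buffer: a ulonglong
    countdown followed by up to `limit` table refs; mhnsw_read_next()
    then pops one ref per call by decrementing the counter.
  */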
  if (limit > candidates.num)
    limit= candidates.num;
  size_t context_size= limit * ctx->tref_len + sizeof(ulonglong);
  char *context= thd->alloc(context_size);
  graph->context= context;

  *(ulonglong*)context= limit;
  context+= context_size;
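
  /* fill the buffer back to front; assuming candidates.links[] is sorted
     nearest-first, the countdown then returns rows nearest-first */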
  for (size_t i=0; limit--; i++)
  {
    context-= ctx->tref_len;
    memcpy(context, candidates.links[i]->tref(), ctx->tref_len);
  }
  DBUG_ASSERT(context - sizeof(ulonglong) == graph->context);

  return mhnsw_read_next(table);
}
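
/*
  Return the next row of the current k-NN result, or HA_ERR_END_OF_FILE.
  The context buffer holds a ulonglong countdown followed by packed row
  refs, written back to front by the search above, so decrementing the
  counter walks the refs in candidate order.
*/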
int mhnsw_read_next(TABLE *table)
{
  uchar *ref= (uchar*)(table->hlindex->context);
  if (ulonglong *limit= (ulonglong*)ref)
  {
    if (*limit) /* assumed guard: avoids counter underflow if called past the last row */
    {
      ref+= sizeof(ulonglong) + (--*limit) * table->file->ref_length;
      return table->file->ha_rnd_pos(table->record[0], ref);
    }
  }
  return my_errno= HA_ERR_END_OF_FILE;
}
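
/*
  Destroy the shared in-memory graph cache, if one was ever created, when
  the TABLE_SHARE of the high-level index table is freed.
*/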
void mhnsw_free(TABLE_SHARE *share)
{
  TABLE_SHARE *graph_share= share->hlindex;
  if (!graph_share->hlindex_data)
    return;

  static_cast<MHNSW_Context*>(graph_share->hlindex_data)->~MHNSW_Context();
  graph_share->hlindex_data= 0;
}
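
/*
  Invalidate one row of the vector index after the corresponding base
  table row was deleted: look the graph node up by its back reference
  (tref), set the tref to NULL, and mark the cached node deleted. The
  node itself stays in the graph, so neighbor links of other nodes
  remain valid.
*/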
int mhnsw_invalidate(TABLE *table, const uchar *rec, KEY *keyinfo)
{
  TABLE *graph= table->hlindex;
  handler *h= table->file;
  MHNSW_Context *ctx;
  bool use_ctx= !MHNSW_Context::acquire(&ctx, table, true);
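  /* presumably acquire() fails when no shared cache exists yet; in that
     case only the on-disk graph table needs updating */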

  /* metadata are checked on open */
  DBUG_ASSERT(graph);
  DBUG_ASSERT(keyinfo->algorithm == HA_KEY_ALG_VECTOR);
  DBUG_ASSERT(keyinfo->usable_key_parts == 1);
  DBUG_ASSERT(h->ref_length <= graph->field[FIELD_TREF]->field_length);

  // target record:
  h->position(rec);
  graph->field[FIELD_TREF]->set_notnull();
  graph->field[FIELD_TREF]->store_binary(h->ref, h->ref_length);
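
  /* locate the graph node via the unique index on tref */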
  uchar *key= (uchar*)alloca(graph->key_info[IDX_TREF].key_length);
  key_copy(key, graph->record[0], &graph->key_info[IDX_TREF],
           graph->key_info[IDX_TREF].key_length);

  if (int err= graph->file->ha_index_read_idx_map(graph->record[1], IDX_TREF,
                                      key, HA_WHOLE_KEY, HA_READ_KEY_EXACT))
    return err;
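
  /* tombstone, don't delete: nulling the tref keeps the node in the
     graph for connectivity, but it can no longer be a search result */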
  restore_record(graph, record[1]);
  graph->field[FIELD_TREF]->set_null();
  if (int err= graph->file->ha_update_row(graph->record[1], graph->record[0]))
    return err;
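
  /* mirror the tombstone in the shared in-memory cache, if it exists */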
  if (use_ctx)
  {
    graph->file->position(graph->record[0]);
    FVectorNode *node= ctx->get_node(graph->file->ref);
    node->deleted= true;
    ctx->release(table);
  }

  return 0;
}
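
/*
  Empty the vector index completely (e.g. on TRUNCATE or DELETE of all
  rows): wipe the hidden graph table and reset the shared in-memory
  cache, if one exists.
*/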
int mhnsw_delete_all(TABLE *table, KEY *keyinfo)
{
  TABLE *graph= table->hlindex;

  /* metadata are checked on open */
  DBUG_ASSERT(graph);
  DBUG_ASSERT(keyinfo->algorithm == HA_KEY_ALG_VECTOR);
  DBUG_ASSERT(keyinfo->usable_key_parts == 1);

  if (int err= graph->file->ha_delete_all_rows())
    return err;

  MHNSW_Context *ctx;
  if (!MHNSW_Context::acquire(&ctx, table, true))
  {
    ctx->reset(table->s);
    ctx->release(table);
  }

  return 0;
}
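
/*
  DDL template for the hidden high-level index table backing a vector
  index: one row per graph node, with its layer, a back reference to the
  base-table row (tref, NULL once the node is tombstoned), the vector
  itself, and the packed neighbor lists. The tref width depends on the
  base handler's ref_length, hence the %u placeholder; with ref_length=6,
  for example, it would expand to tref varbinary(6).
*/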
const LEX_CSTRING mhnsw_hlindex_table_def(THD *thd, uint ref_length)
{
  const char templ[]= "CREATE TABLE i ( "
                      " layer tinyint not null, "
                      " tref varbinary(%u), "
                      " vec blob not null, "
                      " neighbors blob not null, "
                      " unique (tref), "
                      " key (layer)) ";
  size_t len= sizeof(templ) + 32;
  char *s= thd->alloc(len);
  len= my_snprintf(s, len, templ, ref_length);
  return {s, len};
}