Apply suggestions from code review
Signed-off-by: kolchfa-aws <[email protected]>
kolchfa-aws authored Sep 17, 2024
1 parent 4ba0fd1 commit 42ebaa1
62 changes: 42 additions & 20 deletions _search-plugins/knn/knn-vector-quantization.md
@@ -11,7 +11,7 @@ has_math: true

By default, the k-NN plugin supports the indexing and querying of vectors of type `float`, where each dimension of the vector occupies 4 bytes of memory. For use cases that require ingestion on a large scale, keeping `float` vectors can be expensive because OpenSearch needs to construct, load, save, and search graphs (for native `nmslib` and `faiss` engines). To reduce the memory footprint, you can use vector quantization.

OpenSearch supports many varieties of quantization. In general, the level of quantization provides a trade-off between the accuracy of the nearest neighbor search and the size of the memory footprint consumed by the vector search. The supported types include byte vectors, 16-bit scalar quantization, product quantization (PQ), and binary quantization (BQ).

## Lucene byte vector

@@ -310,12 +310,13 @@ For example, assume that you have 1 million vectors with a dimension of 256, `iv
```r
1.1*((8 / 8 * 64 + 24) * 1000000 + 100 * (2^8 * 4 * 256 + 4 * 512 * 256)) ~= 0.171 GB
```

## Binary quantization

Starting with version 2.17, OpenSearch supports binary quantization (BQ) with binary vector support for the Faiss engine. Binary quantization compresses vectors into a binary format (0s and 1s), making it highly efficient in terms of memory usage. You can choose to represent each vector dimension using 1, 2, or 4 bits, depending on the desired precision. One of the advantages of using binary quantization is that the training process is handled automatically during indexing. This means that no separate training step is required, unlike other quantization techniques such as product quantization.

### Using binary quantization

To configure binary quantization for the Faiss engine, define a `knn_vector` field and specify the `mode` as `on_disk`. This configuration defaults to 1-bit binary quantization, with both `ef_search` and `ef_construction` set to `100`:
```json
PUT my-vector-index
{
@@ -326,20 +327,24 @@
"dimension": 8,
"space_type": "l2",
"data_type": "float",
"mode": "on-disk"
"mode": "on_disk"
}
}
}
}

```
{% include copy-curl.html %}

To further optimize the configuration, you can specify additional parameters such as the compression level and fine-tune the search parameters. For example, you can override the `ef_construction` value or define the compression level, which corresponds to the number of bits used for quantization:

- **32x compression** for 1-bit quantization
- **16x compression** for 2-bit quantization
- **8x compression** for 4-bit quantization

This allows for greater control over memory usage and recall performance, providing the flexibility to balance precision and storage efficiency.
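
As a rough illustration (a sketch of vector storage only, not a full memory estimate; see the memory estimation sections below), a 256-dimensional `float` vector occupies 256 * 4 = 1,024 bytes, so the quantized vector data alone shrinks to approximately the following sizes:

```r
1024 / 32 ~= 32 bytes per vector   # 1-bit quantization (32x compression)
1024 / 16 ~= 64 bytes per vector   # 2-bit quantization (16x compression)
1024 / 8  ~= 128 bytes per vector  # 4-bit quantization (8x compression)
```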

To specify the compression level, set the `compression_level` parameter:

```json
PUT my-vector-index
{
@@ -351,7 +356,7 @@
"space_type": "l2",
"data_type": "float",
"mode": "on-disk",
"compression_level": "16x", // Can also be 8x or 32x
"compression_level": "16x",
"method": {
"params": {
"ef_construction": 16
@@ -362,7 +367,8 @@
}
}
```
{% include copy-curl.html %}

The following example further fine-tunes the configuration by defining `ef_construction`, `encoder`, and the number of bits (`bits`):

```json
PUT my-vector-index
@@ -392,8 +398,11 @@
}
}
```
{% include copy-curl.html %}
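
After creating the index, you can ingest vectors the same way as for any other `knn_vector` field. The following request is a minimal sketch that assumes the vector field is named `my_vector_field` and has a dimension of 8, as in the first example (adjust the field name and vector length to match your mapping):

```json
PUT my-vector-index/_doc/1
{
  "my_vector_field": [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5]
}
```
{% include copy-curl.html %}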

### Search using binary quantized vectors

You can perform a k-NN search on your index by providing a vector and specifying the number of nearest neighbors (k) to return:

```json
GET my-vector-index/_search
{
@@ -408,8 +417,13 @@
}
}
```
{% include copy-curl.html %}

You can also fine-tune search by providing the `ef_search` and `oversample_factor` parameters.

The `oversample_factor` parameter controls the factor by which the search oversamples the candidate vectors before ranking them. A higher oversample factor means that more candidates will be considered before ranking, improving accuracy but also increasing search time. When selecting the `oversample_factor` value, consider the trade-off between accuracy and efficiency. For example, setting the `oversample_factor` to `2.0` will double the number of candidates considered during the ranking phase, which may help achieve better results.

The following request specifies the `ef_search` and `oversample_factor` parameters:

```json
GET my-vector-index/_search
{
@@ -429,28 +443,36 @@
}
}
}

```
{% include copy-curl.html %}
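
As an illustration (a sketch assuming a hypothetical `k` of 8 and that the number of ranked candidates scales as `k * oversample_factor`, consistent with the description above):

```r
8 * 2.0 ~= 16 candidates considered during ranking
```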


#### HNSW memory estimation

The memory required for the Hierarchical Navigable Small World (HNSW) graph can be estimated as `1.1 * (dimension + 8 * m)` bytes/vector, where `m` is the maximum number of bidirectional links created for each element during the construction of the graph.

As an example, assume that you have 1 million vectors with a dimension of 256 and an `m` of 16. The following sections provide memory requirement estimations for various compression values.

##### 1-bit quantization (32x compression)

In 1-bit quantization, each dimension is represented using 1 bit, equivalent to a 32x compression factor. The memory requirement can be estimated as follows:

```r
Memory = 1.1 * ((256 * 1 / 8) + 8 * 16) * 1,000,000
~= 0.176 GB
```

##### 2-bit quantization (16x compression)

In 2-bit quantization, each dimension is represented using 2 bits, equivalent to a 16x compression factor. The memory requirement can be estimated as follows:

```r
Memory = 1.1 * ((256 * 2 / 8) + 8 * 16) * 1,000,000
~= 0.211 GB
```

##### 4-bit quantization (8x compression)

In 4-bit quantization, each dimension is represented using 4 bits, equivalent to an 8x compression factor. The memory requirement can be estimated as follows:

```r
Memory = 1.1 * ((256 * 4 / 8) + 8 * 16) * 1,000,000
~= 0.282 GB
```
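
The three estimates above follow the same pattern. The following sketch generalizes them for any supported bit count; it assumes the same `1.1` overhead factor used in the estimates above and reports the result in GB:

```r
# Estimated HNSW memory (in GB) for binary-quantized vectors (sketch).
# bits: 1, 2, or 4; m: maximum number of bidirectional links per element.
bq_hnsw_memory_gb <- function(num_vectors, dimension, bits, m) {
  1.1 * ((dimension * bits / 8) + 8 * m) * num_vectors / 1e9
}

bq_hnsw_memory_gb(1e6, 256, 4, 16)  # ~= 0.282 GB
```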

