Add guides for generating embeddings with Mistral, OpenAI, Voyage, and Cloudflare #2965

Merged · 22 commits · Sep 3, 2024
25 changes: 25 additions & 0 deletions config/sidebar-guides.json
@@ -64,6 +64,31 @@
"source": "guides/computing_hugging_face_embeddings_gpu.mdx",
"label": "Computing Hugging Face embeddings with the GPU",
"slug": "computing_hugging_face_embeddings_gpu"
},
{
"source": "guides/embedders/cloudflare.mdx",
"label": "Semantic search with Cloudflare embeddings",
"slug": "cloudflare"
},
{
"source": "guides/embedders/cohere.mdx",
"label": "Semantic search with Cohere embeddings",
"slug": "cohere"
},
{
"source": "guides/embedders/mistral.mdx",
"label": "Semantic search with Mistral embeddings",
"slug": "mistral"
},
{
"source": "guides/embedders/openai.mdx",
"label": "Semantic search with OpenAI embeddings",
"slug": "openai"
},
{
"source": "guides/embedders/voyage.mdx",
"label": "Semantic search with Voyage embeddings",
"slug": "voyage"
}
]
},
99 changes: 99 additions & 0 deletions guides/embedders/cloudflare.mdx
@@ -0,0 +1,99 @@
---
title: Semantic Search with Cloudflare Worker AI Embeddings - Meilisearch documentation
description: This guide will walk you through the process of setting up Meilisearch with Cloudflare Worker AI embeddings to enable semantic search capabilities.
---

# Semantic search with Cloudflare Worker AI embeddings

## Introduction

This guide will walk you through the process of setting up Meilisearch with Cloudflare Worker AI embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Cloudflare Worker AI's embedding API, you can enhance your search experience and retrieve more relevant results.

## Requirements

To follow this guide, you'll need:

- A [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=vector-search&utm_source=docs&utm_medium=cloudflare-embeddings-guide) project running version 1.10 or above with the Vector store activated.
- A Cloudflare account with access to Worker AI and an API key. You can sign up for a Cloudflare account at [Cloudflare](https://www.cloudflare.com/).
- Your Cloudflare account ID.
- No backend is required: Meilisearch communicates with the embedding API directly.

## Setting up Meilisearch

To set up an embedder in Meilisearch, you need to add it to your index settings. Refer to the [Meilisearch documentation](/reference/api/settings?utm_campaign=vector-search&utm_source=docs&utm_medium=cloudflare-embeddings-guide#update-embedder-settings) for more details on updating the embedder settings.

Cloudflare Worker AI offers the following embedding models:

- `baai/bge-base-en-v1.5`: 768 dimensions
- `baai/bge-large-en-v1.5`: 1024 dimensions
- `baai/bge-small-en-v1.5`: 384 dimensions

Here's an example of embedder settings for Cloudflare Worker AI:

```json
{
  "cloudflare": {
    "source": "rest",
    "apiKey": "<API Key>",
    "dimensions": 384,
    "documentTemplate": "<Custom template (Optional, but recommended)>",
    "url": "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/ai/run/@cf/<Model>",
    "request": {
      "text": ["{{text}}", "{{..}}"]
    },
    "response": {
      "result": {
        "data": ["{{embedding}}", "{{..}}"]
      }
    }
  }
}
```

In this configuration:

- `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
- `apiKey`: Replace `<API Key>` with your actual Cloudflare API key.
- `dimensions`: Specifies the dimensions of the embeddings. Set to 384 for `baai/bge-small-en-v1.5`, 768 for `baai/bge-base-en-v1.5`, or 1024 for `baai/bge-large-en-v1.5`.
- `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai-powered-search/getting_started_with_ai_search?utm_campaign=vector-search&utm_source=docs&utm_medium=cloudflare-embeddings-guide#documenttemplate) for generating embeddings from your documents.
- `url`: Specifies the URL of the Cloudflare Worker AI API endpoint.
- `request`: Defines the request structure for the Cloudflare Worker AI API, including the input parameters.
- `response`: Defines the expected response structure from the Cloudflare Worker AI API, including the embedding data.

Be careful when setting up the `url` field in your configuration. The URL contains your Cloudflare account ID (`<ACCOUNT_ID>`) and the specific model you want to use (`<Model>`). Make sure to replace these placeholders with your actual account ID and the desired model name (e.g., `baai/bge-small-en-v1.5`).
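
As a minimal sketch, the settings above could be applied with a single call to Meilisearch's settings endpoint using Python's `requests` library. The host URL, index name (`movies`), and admin API key below are placeholders for illustration, and the model URL uses `baai/bge-small-en-v1.5` to match the 384 dimensions shown above:

```python
import requests

MEILISEARCH_URL = "https://ms-xxxxxxxx.meilisearch.io"  # placeholder: your Meilisearch Cloud host
ADMIN_API_KEY = "<Meilisearch Admin API Key>"           # placeholder: admin key, not the search key
INDEX_UID = "movies"                                    # placeholder: your index name

embedder_settings = {
    "embedders": {
        "cloudflare": {
            "source": "rest",
            "apiKey": "<Cloudflare API Key>",
            "dimensions": 384,
            "url": "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/ai/run/@cf/baai/bge-small-en-v1.5",
            "request": {"text": ["{{text}}", "{{..}}"]},
            "response": {"result": {"data": ["{{embedding}}", "{{..}}"]}},
        }
    }
}

# PATCH the index settings; Meilisearch processes the update as an asynchronous task.
resp = requests.patch(
    f"{MEILISEARCH_URL}/indexes/{INDEX_UID}/settings",
    headers={"Authorization": f"Bearer {ADMIN_API_KEY}"},
    json=embedder_settings,
)
print(resp.json())  # contains a taskUid you can use to track the update
```

The update is asynchronous, so the embedder only takes effect once the returned task succeeds.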

Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.

Please note that Cloudflare may apply rate limits, which Meilisearch manages for you. If you have a free account, the indexing process may take some time, but Meilisearch will handle it with a retry strategy.

It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/tasks?utm_campaign=vector-search&utm_source=docs&utm_medium=cloudflare-embeddings-guide#get-tasks).

## Testing semantic search

With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:

```json
{
  "q": "<Query made by the user>",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "cloudflare"
  }
}
```

In this request:

- `q`: Represents the user's search query.
- `hybrid`: Specifies the configuration for the hybrid search.
- `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
- `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "cloudflare".

You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
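
For example, here is a minimal sketch of that request sent to the search endpoint with Python's `requests` library. The host, index name, search key, and query string are placeholders:

```python
import requests

MEILISEARCH_URL = "https://ms-xxxxxxxx.meilisearch.io"  # placeholder host
SEARCH_API_KEY = "<Meilisearch Search API Key>"         # placeholder key
INDEX_UID = "movies"                                    # placeholder index

# Send a pure semantic search (semanticRatio = 1) using the "cloudflare" embedder.
resp = requests.post(
    f"{MEILISEARCH_URL}/indexes/{INDEX_UID}/search",
    headers={"Authorization": f"Bearer {SEARCH_API_KEY}"},
    json={
        "q": "kitchen appliances for small apartments",  # example user query
        "hybrid": {"semanticRatio": 1, "embedder": "cloudflare"},
    },
)
for hit in resp.json()["hits"]:
    print(hit)  # documents ranked by semantic similarity
```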

## Conclusion

By following this guide, you should now have Meilisearch set up with Cloudflare Worker AI embeddings, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.

To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings?utm_campaign=vector-search&utm_source=docs&utm_medium=cloudflare-embeddings-guide#embedders-experimental).
101 changes: 101 additions & 0 deletions guides/embedders/cohere.mdx
@@ -0,0 +1,101 @@
---
title: Semantic Search with Cohere Embeddings - Meilisearch documentation
description: This guide will walk you through the process of setting up Meilisearch with Cohere embeddings to enable semantic search capabilities.
---

# Semantic search with Cohere embeddings

## Introduction

This guide will walk you through the process of setting up Meilisearch with Cohere embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Cohere's embedding API, you can enhance your search experience and retrieve more relevant results.

## Requirements

To follow this guide, you'll need:

- A [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=vector-search&utm_source=docs&utm_medium=cohere-embeddings-guide) project running version 1.10 or above with the Vector store activated.
- A Cohere account with an API key for embedding generation. You can sign up for a Cohere account at [Cohere](https://cohere.com/).
- No backend is required: Meilisearch communicates with the embedding API directly.

## Setting up Meilisearch

To set up an embedder in Meilisearch, you need to add it to your index settings. Refer to the [Meilisearch documentation](/reference/api/settings?utm_campaign=vector-search&utm_source=docs&utm_medium=cohere-embeddings-guide#update-embedder-settings) for more details on updating the embedder settings.

Cohere offers multiple embedding models:
- `embed-english-v3.0` and `embed-multilingual-v3.0`: 1024 dimensions
- `embed-english-light-v3.0` and `embed-multilingual-light-v3.0`: 384 dimensions

Here's an example of embedder settings for Cohere:

```json
{
  "cohere": {
    "source": "rest",
    "apiKey": "<Cohere API Key>",
    "dimensions": 1024,
    "documentTemplate": "<Custom template (Optional, but recommended)>",
    "url": "https://api.cohere.com/v1/embed",
    "request": {
      "model": "embed-english-v3.0",
      "texts": [
        "{{text}}",
        "{{..}}"
      ],
      "input_type": "search_document"
    },
    "response": {
      "embeddings": [
        "{{embedding}}",
        "{{..}}"
      ]
    }
  }
}
```

In this configuration:

- `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
- `apiKey`: Replace `<Cohere API Key>` with your actual Cohere API key.
- `dimensions`: Specifies the dimensions of the embeddings, set to 1024 for the `embed-english-v3.0` model.
- `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai-powered-search/getting_started_with_ai_search?utm_campaign=vector-search&utm_source=docs&utm_medium=cohere-embeddings-guide#documenttemplate) for generating embeddings from your documents.
- `url`: Specifies the URL of the Cohere API endpoint.
- `request`: Defines the request structure for the Cohere API, including the model name and input parameters.
- `response`: Defines the expected response structure from the Cohere API, including the embedding data.

Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.

Please note that most third-party tools apply rate limits, which Meilisearch manages for you. If you have a free account, the indexing process may take some time, but Meilisearch will handle it with a retry strategy.

It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/tasks?utm_campaign=vector-search&utm_source=docs&utm_medium=cohere-embeddings-guide#get-tasks).
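
As a sketch, you can poll the tasks endpoint to watch indexing progress after updating the embedder settings. The host, index name, and admin key below are placeholders:

```python
import requests

MEILISEARCH_URL = "https://ms-xxxxxxxx.meilisearch.io"  # placeholder host
ADMIN_API_KEY = "<Meilisearch Admin API Key>"           # placeholder key

# List recent tasks for the index, e.g. settings updates and document indexing.
resp = requests.get(
    f"{MEILISEARCH_URL}/tasks",
    headers={"Authorization": f"Bearer {ADMIN_API_KEY}"},
    params={"indexUids": "movies", "limit": 20},
)
for task in resp.json()["results"]:
    print(task["uid"], task["type"], task["status"])
```

Once the relevant tasks reach the `succeeded` status, the embedder is ready to use at search time.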

## Testing semantic search

With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:

```json
{
  "q": "<Query made by the user>",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "cohere"
  }
}
```

In this request:

- `q`: Represents the user's search query.
- `hybrid`: Specifies the configuration for the hybrid search.
- `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
- `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "cohere".

You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
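
For instance, here is a sketch using the official `meilisearch` Python client, assuming a client version that forwards the `hybrid` search parameter; the host, index name, key, and query are placeholders. Lowering `semanticRatio` to 0.5 blends keyword and semantic ranking:

```python
import meilisearch

client = meilisearch.Client(
    "https://ms-xxxxxxxx.meilisearch.io",  # placeholder host
    "<Meilisearch Search API Key>",        # placeholder key
)

# Hybrid search: half full-text relevance, half semantic similarity via the "cohere" embedder.
results = client.index("movies").search(
    "cozy mystery novels set in small towns",  # example user query
    {"hybrid": {"semanticRatio": 0.5, "embedder": "cohere"}},
)
for hit in results["hits"]:
    print(hit)
```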

## Conclusion

By following this guide, you should now have Meilisearch set up with Cohere embeddings, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.

To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings?utm_campaign=vector-search&utm_source=docs&utm_medium=cohere-embeddings-guide#embedders-experimental).

97 changes: 97 additions & 0 deletions guides/embedders/mistral.mdx
@@ -0,0 +1,97 @@
---
title: Semantic Search with Mistral Embeddings - Meilisearch documentation
description: This guide will walk you through the process of setting up Meilisearch with Mistral embeddings to enable semantic search capabilities.
---

# Semantic search with Mistral embeddings

## Introduction

This guide will walk you through the process of setting up Meilisearch with Mistral embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Mistral's embedding API, you can enhance your search experience and retrieve more relevant results.

## Requirements

To follow this guide, you'll need:

- A [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=vector-search&utm_source=docs&utm_medium=mistral-embeddings-guide) project running version 1.10 or above with the Vector store activated.
- A Mistral account with an API key for embedding generation. You can sign up for a Mistral account at [Mistral](https://mistral.ai/).
- No backend is required: Meilisearch communicates with the embedding API directly.

## Setting up Meilisearch

To set up an embedder in Meilisearch, you need to add it to your index settings. Refer to the [Meilisearch documentation](/reference/api/settings?utm_campaign=vector-search&utm_source=docs&utm_medium=mistral-embeddings-guide#update-embedder-settings) for more details on updating the embedder settings.

When using Mistral to generate embeddings, you'll use the `mistral-embed` model. Unlike some other services, Mistral currently offers only one embedding model.

Here's an example of embedder settings for Mistral:

```json
{
  "mistral": {
    "source": "rest",
    "apiKey": "<Mistral API Key>",
    "dimensions": 1024,
    "documentTemplate": "<Custom template (Optional, but recommended)>",
    "url": "https://api.mistral.ai/v1/embeddings",
    "request": {
      "model": "mistral-embed",
      "input": ["{{text}}", "{{..}}"]
    },
    "response": {
      "data": [
        {
          "embedding": "{{embedding}}"
        },
        "{{..}}"
      ]
    }
  }
}
```

In this configuration:

- `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
- `apiKey`: Replace `<Mistral API Key>` with your actual Mistral API key.
- `dimensions`: Specifies the dimensions of the embeddings, set to 1024 for the `mistral-embed` model.
- `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai-powered-search/getting_started_with_ai_search?utm_campaign=vector-search&utm_source=docs&utm_medium=mistral-embeddings-guide#documenttemplate) for generating embeddings from your documents.
- `url`: Specifies the URL of the Mistral API endpoint.
- `request`: Defines the request structure for the Mistral API, including the model name and input parameters.
- `response`: Defines the expected response structure from the Mistral API, including the embedding data.

Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
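
As a sketch, once the `mistral` embedder is configured you only need to add documents as usual and Meilisearch generates their embeddings during indexing. The host, index name, admin key, and documents below are placeholders:

```python
import requests

MEILISEARCH_URL = "https://ms-xxxxxxxx.meilisearch.io"  # placeholder host
ADMIN_API_KEY = "<Meilisearch Admin API Key>"           # placeholder key

documents = [  # placeholder documents
    {"id": 1, "title": "Shazam!", "overview": "A boy gains the ability to become an adult superhero."},
    {"id": 2, "title": "Dune", "overview": "Paul Atreides leads a rebellion on the desert planet Arrakis."},
]

# Adding documents triggers indexing; embeddings are generated with the configured embedder.
resp = requests.post(
    f"{MEILISEARCH_URL}/indexes/movies/documents",
    headers={"Authorization": f"Bearer {ADMIN_API_KEY}"},
    json=documents,
)
print(resp.json())  # returns a task you can follow in the tasks queue
```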

Please note that most third-party tools apply rate limits, which Meilisearch manages for you. If you have a free account, the indexing process may take some time, but Meilisearch will handle it with a retry strategy.

It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/tasks?utm_campaign=vector-search&utm_source=docs&utm_medium=mistral-embeddings-guide#get-tasks).

## Testing semantic search

With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:

```json
{
  "q": "<Query made by the user>",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "mistral"
  }
}
```

In this request:

- `q`: Represents the user's search query.
- `hybrid`: Specifies the configuration for the hybrid search.
- `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
- `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "mistral".

You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
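
As a sketch, here is such a request with `showRankingScore` enabled so you can inspect how each hit was ranked. The host, index name, key, query, and the `title` field are placeholders, and `showRankingScore` assumes a Meilisearch version that supports it (v1.3 or later, which a v1.10 project satisfies):

```python
import requests

MEILISEARCH_URL = "https://ms-xxxxxxxx.meilisearch.io"  # placeholder host
SEARCH_API_KEY = "<Meilisearch Search API Key>"         # placeholder key

# Pure semantic search via the "mistral" embedder, with ranking scores in the response.
resp = requests.post(
    f"{MEILISEARCH_URL}/indexes/movies/search",
    headers={"Authorization": f"Bearer {SEARCH_API_KEY}"},
    json={
        "q": "films about space exploration",  # example user query
        "hybrid": {"semanticRatio": 1, "embedder": "mistral"},
        "showRankingScore": True,
    },
)
for hit in resp.json()["hits"]:
    print(hit["_rankingScore"], hit.get("title"))
```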

## Conclusion

By following this guide, you should now have Meilisearch set up with Mistral embeddings, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.

To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings?utm_campaign=vector-search&utm_source=docs&utm_medium=mistral-embeddings-guide#embedders-experimental).
