Commit: Merge branch 'main' into adding-Multiplexer-token-filter-docs
Signed-off-by: kolchfa-aws <[email protected]>

Showing 5 changed files with 418 additions and 4 deletions.

@@ -0,0 +1,137 @@

---
layout: default
title: N-gram
parent: Token filters
nav_order: 290
---

# N-gram token filter

The `ngram` token filter splits a token into smaller substrings of defined lengths, known as _n-grams_, which can improve partial matching and fuzzy search. N-gram filters are commonly used in search applications to support autocomplete, partial matches, and typo-tolerant search. For more information, see [Autocomplete functionality]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/autocomplete/) and [Did-you-mean]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/did-you-mean/).

## Parameters

The `ngram` token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`min_gram` | Optional | Integer | The minimum length of the n-grams. Default is `1`.
`max_gram` | Optional | Integer | The maximum length of the n-grams. Default is `2`.
`preserve_original` | Optional | Boolean | Whether to keep the original token as one of the outputs. Default is `false`.

## Example

The following example request creates a new index named `ngram_example_index` and configures an analyzer with an `ngram` filter:

```json
PUT /ngram_example_index
{
  "settings": {
    "analysis": {
      "filter": {
        "ngram_filter": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 3
        }
      },
      "analyzer": {
        "ngram_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "ngram_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /ngram_example_index/_analyze
{
  "analyzer": "ngram_analyzer",
  "text": "Search"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "se",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "sea",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "ea",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "ear",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "ar",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "arc",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "rc",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "rch",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "ch",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    }
  ]
}
```
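
The example above uses the default `preserve_original` value of `false`. As a quick illustrative sketch (not part of the original example), you can check the effect of `preserve_original` without creating an index because the `_analyze` API accepts inline filter definitions. A request like the following should emit the original token `search` in addition to the n-grams:

```json
POST /_analyze
{
  "tokenizer": "standard",
  "filter": [
    "lowercase",
    {
      "type": "ngram",
      "min_gram": 2,
      "max_gram": 3,
      "preserve_original": true
    }
  ],
  "text": "Search"
}
```
{% include copy-curl.html %}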

@@ -0,0 +1,97 @@

---
layout: default
title: Pattern capture
parent: Token filters
nav_order: 310
---

# Pattern capture token filter

The `pattern_capture` token filter uses regular expressions to capture and extract parts of text according to specific patterns. This filter is useful when you want to extract particular parts of tokens, such as email domains, hashtags, or numbers, and reuse them for further analysis or indexing.

## Parameters

The `pattern_capture` token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`patterns` | Required | Array of strings | An array of regular expressions used to capture parts of the text.
`preserve_original` | Optional | Boolean | Whether to keep the original token in the output. Default is `true`.

## Example

The following example request creates a new index named `email_index` and configures an analyzer with a `pattern_capture` filter that extracts the local part and domain name from an email address:

```json
PUT /email_index
{
  "settings": {
    "analysis": {
      "filter": {
        "email_pattern_capture": {
          "type": "pattern_capture",
          "preserve_original": true,
          "patterns": [
            "^([^@]+)",
            "@(.+)$"
          ]
        }
      },
      "analyzer": {
        "email_analyzer": {
          "tokenizer": "uax_url_email",
          "filter": [
            "email_pattern_capture",
            "lowercase"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /email_index/_analyze
{
  "text": "john.doe@example.com",
  "analyzer": "email_analyzer"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "john.doe@example.com",
      "start_offset": 0,
      "end_offset": 20,
      "type": "<EMAIL>",
      "position": 0
    },
    {
      "token": "john.doe",
      "start_offset": 0,
      "end_offset": 20,
      "type": "<EMAIL>",
      "position": 0
    },
    {
      "token": "example.com",
      "start_offset": 0,
      "end_offset": 20,
      "type": "<EMAIL>",
      "position": 0
    }
  ]
}
```
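
Beyond emails, the same filter can extract the hashtags mentioned earlier. The following is an illustrative sketch (the `#(\\w+)` pattern and sample text are hypothetical, not part of the original example) that uses an inline filter definition with the `_analyze` API:

```json
POST /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "pattern_capture",
      "preserve_original": false,
      "patterns": ["#(\\w+)"]
    }
  ],
  "text": "visit #opensearch for #search tips"
}
```
{% include copy-curl.html %}

With `preserve_original` set to `false`, tokens that match a pattern, such as `#opensearch`, should be reduced to their captured groups (`opensearch`), while tokens that match no pattern should pass through unchanged.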

@@ -0,0 +1,98 @@

---
layout: default
title: Phonetic
parent: Token filters
nav_order: 330
---

# Phonetic token filter

The `phonetic` token filter transforms tokens into their phonetic representations, enabling more flexible matching of words that sound similar but are spelled differently. This is particularly useful for searching names, brands, or other entities that users might spell differently but pronounce similarly.

The `phonetic` token filter is not included in OpenSearch distributions by default. To use this token filter, you must first install the `analysis-phonetic` plugin as follows and then restart OpenSearch:

```bash
./bin/opensearch-plugin install analysis-phonetic
```
{% include copy.html %}

For more information about installing plugins, see [Installing plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/).
{: .note}
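
As an optional check (not part of the original instructions), you can confirm that the plugin was installed successfully by listing the installed plugins:

```json
GET /_cat/plugins
```
{% include copy-curl.html %}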

## Parameters

The `phonetic` token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`encoder` | Optional | String | Specifies the phonetic algorithm to use.<br><br>Valid values are:<br>- `metaphone` (default)<br>- `double_metaphone`<br>- `soundex`<br>- `refined_soundex`<br>- `caverphone1`<br>- `caverphone2`<br>- `cologne`<br>- `nysiis`<br>- `koelnerphonetik`<br>- `haasephonetik`<br>- `beider_morse`<br>- `daitch_mokotoff`
`replace` | Optional | Boolean | Whether to replace the original token. If `false`, the original token is included in the output along with its phonetic encoding. Default is `true`.

## Example

The following example request creates a new index named `names_index` and configures an analyzer with a `phonetic` filter:

```json
PUT /names_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_phonetic_filter": {
          "type": "phonetic",
          "encoder": "double_metaphone",
          "replace": true
        }
      },
      "analyzer": {
        "phonetic_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "my_phonetic_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following requests to examine the tokens generated for the names `Stephen` and `Steven` using the analyzer:

```json
POST /names_index/_analyze
{
  "text": "Stephen",
  "analyzer": "phonetic_analyzer"
}
```
{% include copy-curl.html %}

```json
POST /names_index/_analyze
{
  "text": "Steven",
  "analyzer": "phonetic_analyzer"
}
```
{% include copy-curl.html %}

In both cases, the response contains the same phonetic token, `STFN`; only the offsets differ, reflecting the length of the analyzed text. The following is the response for `Steven`:

```json
{
  "tokens": [
    {
      "token": "STFN",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    }
  ]
}
```
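
To keep the original token alongside its encoding, set `replace` to `false`. Assuming the `analysis-phonetic` plugin is installed, an illustrative request like the following (using an inline filter definition with the `_analyze` API, not part of the original example) should return both the phonetic token `STFN` and the original token:

```json
POST /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "phonetic",
      "encoder": "double_metaphone",
      "replace": false
    }
  ],
  "text": "Stephen"
}
```
{% include copy-curl.html %}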