add whitespace tokenizer docs
Signed-off-by: Anton Rubin <[email protected]>
AntonEliatra committed Oct 9, 2024
1 parent 76486a4 commit 47f85a2
Showing 2 changed files with 106 additions and 1 deletion.
2 changes: 1 addition & 1 deletion _analyzers/tokenizers/index.md
@@ -2,7 +2,7 @@
layout: default
title: Tokenizers
nav_order: 60
-has_children: false
+has_children: true
has_toc: false
---

105 changes: 105 additions & 0 deletions _analyzers/tokenizers/whitespace.md
@@ -0,0 +1,105 @@
---
layout: default
title: Whitespace tokenizer
parent: Tokenizers
nav_order: 160
---

# Whitespace tokenizer

The `whitespace` tokenizer splits text purely on whitespace characters, such as spaces, tabs, and newlines. It treats each whitespace-separated word as a token and performs no additional analysis or normalization, such as lowercasing or punctuation removal.

## Example usage

The following example request creates a new index named `my_index` and configures an analyzer with a `whitespace` tokenizer:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "whitespace_tokenizer": {
          "type": "whitespace"
        }
      },
      "analyzer": {
        "my_whitespace_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "my_whitespace_analyzer"
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated by the analyzer:

```json
POST /my_index/_analyze
{
  "analyzer": "my_whitespace_analyzer",
  "text": "OpenSearch is fast! Really fast."
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "OpenSearch",
      "start_offset": 0,
      "end_offset": 10,
      "type": "word",
      "position": 0
    },
    {
      "token": "is",
      "start_offset": 11,
      "end_offset": 13,
      "type": "word",
      "position": 1
    },
    {
      "token": "fast!",
      "start_offset": 14,
      "end_offset": 19,
      "type": "word",
      "position": 2
    },
    {
      "token": "Really",
      "start_offset": 20,
      "end_offset": 26,
      "type": "word",
      "position": 3
    },
    {
      "token": "fast.",
      "start_offset": 27,
      "end_offset": 32,
      "type": "word",
      "position": 4
    }
  ]
}
```

## Configuration

The `whitespace` tokenizer can be configured with the optional `max_token_length` parameter (integer), which sets the maximum length of a produced token. If a token exceeds this length, it is split into multiple tokens at intervals of `max_token_length` characters. Default is `255`.
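
For example, the following request sketches a tokenizer with a short maximum token length. The index name `my_short_tokens` and the tokenizer and analyzer names here are illustrative, not part of the API:

```json
PUT /my_short_tokens
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "short_whitespace_tokenizer": {
          "type": "whitespace",
          "max_token_length": 2
        }
      },
      "analyzer": {
        "short_whitespace_analyzer": {
          "type": "custom",
          "tokenizer": "short_whitespace_tokenizer"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

With this setting, a word such as `OpenSearch` should be emitted as the tokens `Op`, `en`, `Se`, `ar`, and `ch`, each at its own position.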
