From c04c58d4f5d78e1d817d9655f37b9cc3c38e221f Mon Sep 17 00:00:00 2001
From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com>
Date: Wed, 9 Oct 2024 13:58:32 +0100
Subject: [PATCH] Update keyword-tokenizers.md

Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com>
---
 _analyzers/tokenizers/keyword-tokenizers.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/_analyzers/tokenizers/keyword-tokenizers.md b/_analyzers/tokenizers/keyword-tokenizers.md
index 0110f11c0e..dd5a135ce2 100644
--- a/_analyzers/tokenizers/keyword-tokenizers.md
+++ b/_analyzers/tokenizers/keyword-tokenizers.md
@@ -12,6 +12,7 @@ The keyword tokenizer is a straightforward tokenizer that takes in text and outp
 The keyword tokenizer can be paired with token filters to modify or clean up the text, such as normalizing the data or removing unnecessary characters.
 
 ## Example
+
 By analyzing the text "OpenSearch Example", we can see the text is preserved:
 
 ```
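
For context, the `## Example` section this patch touches demonstrates the keyword tokenizer, and its code block is truncated in this excerpt. A minimal sketch of the kind of `_analyze` request such an example would use (the exact request body in the docs file is not shown here):

```json
POST _analyze
{
  "tokenizer": "keyword",
  "text": "OpenSearch Example"
}
```

Because the keyword tokenizer emits its input as a single token, this request returns one token, `OpenSearch Example`, with the text unchanged.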