
[DOC] Tokenizers - Character group #8350

Merged: 21 commits merged into opensearch-project:main on Jan 2, 2025

Conversation

leanneeliatra (Contributor)

Description

Adds the character group tokenizer documentation in the Analyzers section.

Issues Resolved

Part of #1483 addressed in this PR.

Version

all

Frontend features

n/a

Checklist

  • By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and subject to the Developer Certificate of Origin.
    For more information on the Developer Certificate of Origin and signing off your commits, please check here.


Thank you for submitting your PR. The PR states are In progress (or Draft) -> Tech review -> Doc review -> Editorial review -> Merged.

Before you submit your PR for doc review, make sure the content is technically accurate. If you need help finding a tech reviewer, tag a maintainer.

When you're ready for doc review, tag the assignee of this PR. The doc reviewer may push edits to the PR directly or leave comments and editorial suggestions for you to address (let us know in a comment if you have a preference). The doc reviewer will arrange for an editorial review.

@kolchfa-aws assigned vagimeli and unassigned kolchfa-aws on Sep 23, 2024
@vagimeli (Contributor)

@udabhas Will you review this PR for technical accuracy, or have a peer review it? Thank you.

@vagimeli added the 3 - Tech review, Needs SME, analyzers, and Content gap labels on Sep 24, 2024
@vagimeli (Contributor) left a comment:
Doc review complete

The following response shows that the specified characters have been removed:

```
Fast cars drive fast
```
@udabhas commented on Dec 9, 2024:
This returns `fast!`. All tokens:

Fast cars drive fast!

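For reference, here is a minimal sketch of the `_analyze` response shape this comment describes, listing only the `token` field and omitting offsets, types, and positions; it is an illustration rather than captured output:

```
{
  "tokens": [
    { "token": "Fast" },
    { "token": "cars" },
    { "token": "drive" },
    { "token": "fast!" }
  ]
}
```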

The character group tokenizer accepts the following parameters:

1. `tokenize_on_chars`: Specifies the set of characters on which to tokenize the text. The tokenizer creates a new token whenever it encounters any character from the specified set, which can include single characters (for example, `-` or `@`) and character classes such as `whitespace`, `letter`, `digit`, `punctuation`, and `symbol`. (See the sketch following this item.)
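For context, a minimal `_analyze` request exercising this parameter might look like the following sketch, which assumes the `char_group` tokenizer type and tokenizes only on `whitespace`; with the example sentence from this thread, it would produce the tokens `Fast`, `cars`, `drive`, and `fast!`:

```
POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": [
      "whitespace"
    ]
  },
  "text": "Fast cars drive fast!"
}
```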
It would also be good to add that this tokenizer accepts escape characters.
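As a sketch of what that addition could show, assuming escaped characters such as `\n` are accepted in `tokenize_on_chars`, the following request splits the text only on newlines and would produce the two tokens `Fast cars` and `drive fast!`:

```
POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": [
      "\n"
    ]
  },
  "text": "Fast cars\ndrive fast!"
}
```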

kolchfa-aws and others added 4 commits January 2, 2025 10:59
Signed-off-by: Fanit Kolchina <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
@natebower (Collaborator) left a comment

@kolchfa-aws assigned kolchfa-aws and unassigned vagimeli on Jan 2, 2025
@kolchfa-aws added the backport 2.18 (PR: Backport label for 2.18) label on Jan 2, 2025
@kolchfa-aws merged commit b52ec2f into opensearch-project:main on Jan 2, 2025
7 checks passed
opensearch-trigger-bot bot pushed a commit that referenced this pull request Jan 2, 2025
* finished tokenizers example

Signed-off-by: [email protected] <[email protected]>

* updating nav order

Signed-off-by: [email protected] <[email protected]>

* layout cleanup

Signed-off-by: [email protected] <[email protected]>

* grammar fix

Signed-off-by: [email protected] <[email protected]>

* doc: small update for page numbers

Signed-off-by: [email protected] <[email protected]>

* layout fix: correct sentence case for all examples

Signed-off-by: [email protected] <[email protected]>

* small update: adding copy tag for json segment

Signed-off-by: [email protected] <[email protected]>

* Update _analyzers/tokenizers/character-group-tokenizer.md

Signed-off-by: Melissa Vagi <[email protected]>

* Update _analyzers/tokenizers/character-group-tokenizer.md

Signed-off-by: Melissa Vagi <[email protected]>

* Update _analyzers/tokenizers/character-group-tokenizer.md

Signed-off-by: Melissa Vagi <[email protected]>

* Apply suggestions from code review

Co-authored-by: Melissa Vagi <[email protected]>
Signed-off-by: leanneeliatra <[email protected]>

* Doc review

Signed-off-by: Fanit Kolchina <[email protected]>

* Reorder index

Signed-off-by: Fanit Kolchina <[email protected]>

* Add escape characters

Signed-off-by: Fanit Kolchina <[email protected]>

---------

Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Melissa Vagi <[email protected]>
Signed-off-by: leanneeliatra <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
Co-authored-by: Melissa Vagi <[email protected]>
Co-authored-by: Fanit Kolchina <[email protected]>
Co-authored-by: kolchfa-aws <[email protected]>
(cherry picked from commit b52ec2f)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Labels
  • 3 - Tech review (PR: Tech review in progress)
  • analyzers
  • backport 2.18 (PR: Backport label for 2.18)
  • Content gap
  • Needs SME (Waiting on input from subject matter expert)
5 participants