Merge branch 'main' into fixing-layout-of-enabling-security-plugin

Showing 45 changed files with 649 additions and 19 deletions.

---
layout: default
title: CJK bigram
parent: Token filters
nav_order: 30
---

# CJK bigram token filter

The `cjk_bigram` token filter is designed specifically for processing East Asian languages, such as Chinese, Japanese, and Korean (CJK), which typically don't use spaces to separate words. A bigram is a sequence of two adjacent elements in a string of tokens, which can be characters or words. For CJK languages, bigrams help approximate word boundaries and capture significant character pairs that can convey meaning. For example, the character sequence 東京タワー produces the bigrams 東京, 京タ, タワ, and ワー.

## Parameters

The `cjk_bigram` token filter can be configured with two parameters: `ignored_scripts` and `output_unigrams`.

### `ignored_scripts`

The `cjk_bigram` token filter ignores all non-CJK scripts (writing systems like Latin or Cyrillic) and tokenizes only CJK text into bigrams. Use this option to specify CJK scripts to be ignored. This option takes the following valid values:

- `han`: The `han` script processes Han characters. [Han characters](https://simple.wikipedia.org/wiki/Chinese_characters) are logograms used in the written languages of China, Japan, and Korea. The filter can help with text processing tasks like tokenizing, normalizing, or stemming text written in Chinese, Japanese kanji, or Korean Hanja.

- `hangul`: The `hangul` script processes Hangul characters, which are unique to the Korean language and do not exist in other East Asian scripts.

- `hiragana`: The `hiragana` script processes hiragana, one of the two syllabaries used in the Japanese writing system. Hiragana is typically used for native Japanese words, grammatical elements, and certain forms of punctuation.

- `katakana`: The `katakana` script processes katakana, the other Japanese syllabary. Katakana is mainly used for foreign loanwords, onomatopoeia, scientific names, and certain Japanese words.

### `output_unigrams`

This option, when set to `true`, outputs both unigrams (single characters) and bigrams. Default is `false`.

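To compare the two settings quickly, you can pass an inline filter definition to the `_analyze` API instead of creating an index first. The following request is a minimal sketch with illustrative sample text; because `output_unigrams` is set to `true`, the response contains both the single characters and the bigrams:

```json
POST /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "cjk_bigram",
      "output_unigrams": true
    }
  ],
  "text": "東京に行く"
}
```
{% include copy-curl.html %}
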
## Example

The following example request creates a new index named `cjk_bigram_example` and defines an analyzer with a `cjk_bigram` filter whose `ignored_scripts` parameter is set to `katakana`:

```json
PUT /cjk_bigram_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_bigrams_no_katakana": {
          "tokenizer": "standard",
          "filter": [ "cjk_bigrams_no_katakana_filter" ]
        }
      },
      "filter": {
        "cjk_bigrams_no_katakana_filter": {
          "type": "cjk_bigram",
          "ignored_scripts": [
            "katakana"
          ],
          "output_unigrams": true
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_bigram_example/_analyze
{
  "analyzer": "cjk_bigrams_no_katakana",
  "text": "東京タワーに行く"
}
```
{% include copy-curl.html %}

Sample text: "東京タワーに行く"

- 東京 (Kanji for "Tokyo")
- タワー (Katakana for "Tower"; because `katakana` is listed in `ignored_scripts`, this token is not split into bigrams)
- に行く (Hiragana and Kanji for "go to")

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "東",
      "start_offset": 0,
      "end_offset": 1,
      "type": "<SINGLE>",
      "position": 0
    },
    {
      "token": "東京",
      "start_offset": 0,
      "end_offset": 2,
      "type": "<DOUBLE>",
      "position": 0,
      "positionLength": 2
    },
    {
      "token": "京",
      "start_offset": 1,
      "end_offset": 2,
      "type": "<SINGLE>",
      "position": 1
    },
    {
      "token": "タワー",
      "start_offset": 2,
      "end_offset": 5,
      "type": "<KATAKANA>",
      "position": 2
    },
    {
      "token": "に",
      "start_offset": 5,
      "end_offset": 6,
      "type": "<SINGLE>",
      "position": 3
    },
    {
      "token": "に行",
      "start_offset": 5,
      "end_offset": 7,
      "type": "<DOUBLE>",
      "position": 3,
      "positionLength": 2
    },
    {
      "token": "行",
      "start_offset": 6,
      "end_offset": 7,
      "type": "<SINGLE>",
      "position": 4
    },
    {
      "token": "行く",
      "start_offset": 6,
      "end_offset": 8,
      "type": "<DOUBLE>",
      "position": 4,
      "positionLength": 2
    },
    {
      "token": "く",
      "start_offset": 7,
      "end_offset": 8,
      "type": "<SINGLE>",
      "position": 5
    }
  ]
}
```

---
layout: default
title: CJK width
parent: Token filters
nav_order: 40
---

# CJK width token filter

The `cjk_width` token filter normalizes Chinese, Japanese, and Korean (CJK) tokens by converting full-width ASCII characters to their standard (half-width) ASCII equivalents and half-width katakana characters to their full-width equivalents.

### Converting full-width ASCII characters

In CJK texts, ASCII characters (such as letters and numbers) can appear in full-width form, occupying the space of two half-width characters. Full-width ASCII characters are typically used in East Asian typography for alignment with the width of CJK characters. However, for the purposes of indexing and searching, these full-width characters need to be normalized to their standard (half-width) ASCII equivalents.

The following example illustrates ASCII character normalization:

```
Full-width: ＡＢＣＤＥ １２３４５
Normalized (half-width): ABCDE 12345
```

### Converting half-width katakana characters

The `cjk_width` token filter converts half-width katakana characters to their full-width counterparts, which are the standard form used in Japanese text. This normalization, illustrated in the following example, is important for consistency in text processing and searching:

```
Half-width katakana: ｶﾀｶﾅ
Normalized (full-width) katakana: カタカナ
```

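You can observe both conversions without creating an index by referencing the built-in `cjk_width` filter directly in an `_analyze` request. The following is a minimal sketch; the sample text is illustrative and contains full-width Latin letters followed by half-width katakana, which should be returned as the normalized tokens `OpenSearch` and `カタカナ`:

```json
POST /_analyze
{
  "tokenizer": "standard",
  "filter": [ "cjk_width" ],
  "text": "ＯｐｅｎＳｅａｒｃｈ ｶﾀｶﾅ"
}
```
{% include copy-curl.html %}
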
## Example

The following example request creates a new index named `cjk_width_example_index` and defines an analyzer with the `cjk_width` filter:

```json
PUT /cjk_width_example_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_width_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["cjk_width"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_width_example_index/_analyze
{
  "analyzer": "cjk_width_analyzer",
  "text": "Ｔｏｋｙｏ ２０２４ ｶﾀｶﾅ"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "Tokyo",
      "start_offset": 0,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "2024",
      "start_offset": 6,
      "end_offset": 10,
      "type": "<NUM>",
      "position": 1
    },
    {
      "token": "カタカナ",
      "start_offset": 11,
      "end_offset": 15,
      "type": "<KATAKANA>",
      "position": 2
    }
  ]
}
```

---
layout: default
title: Streaming bulk
parent: Document APIs
nav_order: 25
redirect_from:
- /opensearch/rest-api/document-apis/bulk/streaming/
---

# Streaming bulk
**Introduced 2.17.0**
{: .label .label-purple }

This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated [GitHub issue](https://github.com/opensearch-project/OpenSearch/issues/9065).
{: .warning}

The streaming bulk operation lets you add, update, or delete multiple documents by streaming the request and receiving the results as a streaming response. Compared to the traditional [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/), streaming ingestion eliminates the need to estimate the batch size (which is affected by the cluster's operational state at any given time) and naturally applies backpressure between many clients and the cluster. Streaming works over HTTP/2 or HTTP/1.1 (using chunked transfer encoding), depending on the capabilities of the client and the cluster.

The default HTTP transport method does not support streaming. You must install the [`transport-reactor-netty4`]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/network-settings/#selecting-the-transport) HTTP transport plugin and use it as the default HTTP transport layer. Both the `transport-reactor-netty4` plugin and the Streaming Bulk API are experimental.
{: .note}

## Path and HTTP methods

```json
POST _bulk/stream
POST <index>/_bulk/stream
```

If you specify the index in the path, then you don't need to include it in the [request body chunks]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/#request-body).

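For example, the following sketch streams operations to the `movies` index specified in the path, so the `_index` field is omitted from the action lines (the document IDs are illustrative):

```json
curl -X POST "http://localhost:9200/movies/_bulk/stream" -H "Transfer-Encoding: chunked" -H "Content-Type: application/json" -d'
{ "index": { "_id": "tt1979320" } }
{ "title": "Rush", "year": 2013 }
{ "delete": { "_id": "tt2229499" } }
'
```
{% include copy.html %}
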
OpenSearch also accepts PUT requests to the `_bulk/stream` path, but we highly recommend using POST. The accepted usage of PUT---adding or replacing a single resource on a given path---doesn't make sense for streaming bulk requests.
{: .note }

## Query parameters

The following table lists the available query parameters. All query parameters are optional.

Parameter | Data type | Description
:--- | :--- | :---
`pipeline` | String | The pipeline ID for preprocessing documents.
`refresh` | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` causes the changes to show up in search results immediately but degrades cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance isn't degraded.
`require_alias` | Boolean | Set to `true` to require that all actions target an index alias rather than an index. Default is `false`.
`routing` | String | Routes the request to the specified shard.
`timeout` | Time | How long to wait for the request to return. Default is `1m`.
`type` | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using the `_doc` type for all indexes.
`wait_for_active_shards` | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is `1` (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. For example, if you specify a value of 3, the index must have 2 replicas distributed across 2 additional nodes in order for the request to succeed.
`batch_interval` | Time | Specifies how long bulk operations should be accumulated into a batch before the batch is sent to data nodes.
`batch_size` | Integer | Specifies how many bulk operations should be accumulated into a batch before the batch is sent to data nodes. Default is `1`.

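For example, the following sketch shows how the batching parameters are passed in the URL; the values are illustrative, not recommendations:

```json
POST _bulk/stream?batch_interval=5s&batch_size=25
```
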
## Request body

The Streaming Bulk API request body is fully compatible with the [Bulk API request body]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/#request-body), where each bulk operation (create/index/update/delete) is sent as a separate chunk.

## Example request

```json
curl -X POST "http://localhost:9200/_bulk/stream" -H "Transfer-Encoding: chunked" -H "Content-Type: application/json" -d'
{ "delete": { "_index": "movies", "_id": "tt2229499" } }
{ "index": { "_index": "movies", "_id": "tt1979320" } }
{ "title": "Rush", "year": 2013 }
{ "create": { "_index": "movies", "_id": "tt1392214" } }
{ "title": "Prisoners", "year": 2013 }
{ "update": { "_index": "movies", "_id": "tt0816711" } }
{ "doc" : { "title": "World War Z" } }
'
```
{% include copy.html %}

## Example response

Depending on the batch settings, each streamed response chunk may report the results of one or many (batch) bulk operations. For example, for the preceding request with no batching (default), the streaming response may appear as follows:

```json
{"took": 11, "errors": false, "items": [ { "index": {"_index": "movies", "_id": "tt1979320", "_version": 1, "result": "created", "_shards": { "total": 2, "successful": 1, "failed": 0 }, "_seq_no": 1, "_primary_term": 1, "status": 201 } } ] }
{"took": 2, "errors": true, "items": [ { "create": { "_index": "movies", "_id": "tt1392214", "status": 409, "error": { "type": "version_conflict_engine_exception", "reason": "[tt1392214]: version conflict, document already exists (current version [1])", "index": "movies", "shard": "0", "index_uuid": "yhizhusbSWmP0G7OJnmcLg" } } } ] }
{"took": 4, "errors": true, "items": [ { "update": { "_index": "movies", "_id": "tt0816711", "status": 404, "error": { "type": "document_missing_exception", "reason": "[_doc][tt0816711]: document missing", "index": "movies", "shard": "0", "index_uuid": "yhizhusbSWmP0G7OJnmcLg" } } } ] }
```