Merge branch 'main' into fixing-layout-of-enabling-security-plugin
vagimeli authored Sep 13, 2024
2 parents 86d77ca + 9f5fe32 commit 375fa13
Showing 45 changed files with 649 additions and 19 deletions.
160 changes: 160 additions & 0 deletions _analyzers/token-filters/cjk-bigram.md
@@ -0,0 +1,160 @@
---
layout: default
title: CJK bigram
parent: Token filters
nav_order: 30
---

# CJK bigram token filter

The `cjk_bigram` token filter is designed specifically for processing Chinese, Japanese, and Korean (CJK) text, in which words are typically not separated by spaces. A bigram is a sequence of two adjacent elements in a string of tokens, which can be characters or words. For CJK languages, bigrams approximate word boundaries by capturing meaningful character pairs. For example, the string 東京タワー produces the bigrams 東京, 京タ, タワ, and ワー.


## Parameters

The `cjk_bigram` token filter can be configured with two parameters: `ignored_scripts` and `output_unigrams`.

### `ignored_scripts`

The `cjk_bigram` token filter ignores all non-CJK scripts (writing systems like Latin or Cyrillic) and tokenizes only CJK text into bigrams. Use this parameter to specify CJK scripts that should also be ignored. This parameter takes the following valid values:

- `han`: The `han` script processes Han characters. [Han characters](https://simple.wikipedia.org/wiki/Chinese_characters) are logograms used in the written languages of China, Japan, and Korea. The filter can help with text processing tasks like tokenizing, normalizing, or stemming text written in Chinese, Japanese kanji, or Korean Hanja.

- `hangul`: The `hangul` script processes Hangul characters, which are unique to the Korean language and do not exist in other East Asian scripts.

- `hiragana`: The `hiragana` script processes hiragana, one of the two syllabaries used in the Japanese writing system.
Hiragana is typically used for native Japanese words, grammatical elements, and certain forms of punctuation.

- `katakana`: The `katakana` script processes katakana, the other Japanese syllabary.
Katakana is mainly used for foreign loanwords, onomatopoeia, scientific names, and certain Japanese words.


### `output_unigrams`

When set to `true`, this parameter outputs both unigrams (single characters) and bigrams. Default is `false`.

## Example

The following example request creates a new index named `cjk_bigram_example` and defines an analyzer that uses a `cjk_bigram` filter with the `ignored_scripts` parameter set to `katakana`:

```json
PUT /cjk_bigram_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_bigrams_no_katakana": {
          "tokenizer": "standard",
          "filter": [ "cjk_bigrams_no_katakana_filter" ]
        }
      },
      "filter": {
        "cjk_bigrams_no_katakana_filter": {
          "type": "cjk_bigram",
          "ignored_scripts": [
            "katakana"
          ],
          "output_unigrams": true
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_bigram_example/_analyze
{
  "analyzer": "cjk_bigrams_no_katakana",
  "text": "東京タワーに行く"
}
```
{% include copy-curl.html %}

Sample text: "東京タワーに行く"

- 東京 (kanji for "Tokyo")
- タワー (katakana for "Tower")
- に行く (hiragana and kanji for "go to")

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "東",
      "start_offset": 0,
      "end_offset": 1,
      "type": "<SINGLE>",
      "position": 0
    },
    {
      "token": "東京",
      "start_offset": 0,
      "end_offset": 2,
      "type": "<DOUBLE>",
      "position": 0,
      "positionLength": 2
    },
    {
      "token": "京",
      "start_offset": 1,
      "end_offset": 2,
      "type": "<SINGLE>",
      "position": 1
    },
    {
      "token": "タワー",
      "start_offset": 2,
      "end_offset": 5,
      "type": "<KATAKANA>",
      "position": 2
    },
    {
      "token": "に",
      "start_offset": 5,
      "end_offset": 6,
      "type": "<SINGLE>",
      "position": 3
    },
    {
      "token": "に行",
      "start_offset": 5,
      "end_offset": 7,
      "type": "<DOUBLE>",
      "position": 3,
      "positionLength": 2
    },
    {
      "token": "行",
      "start_offset": 6,
      "end_offset": 7,
      "type": "<SINGLE>",
      "position": 4
    },
    {
      "token": "行く",
      "start_offset": 6,
      "end_offset": 8,
      "type": "<DOUBLE>",
      "position": 4,
      "positionLength": 2
    },
    {
      "token": "く",
      "start_offset": 7,
      "end_offset": 8,
      "type": "<SINGLE>",
      "position": 5
    }
  ]
}
```


96 changes: 96 additions & 0 deletions _analyzers/token-filters/cjk-width.md
@@ -0,0 +1,96 @@
---
layout: default
title: CJK width
parent: Token filters
nav_order: 40
---

# CJK width token filter

The `cjk_width` token filter normalizes Chinese, Japanese, and Korean (CJK) tokens by converting full-width ASCII characters to their standard (half-width) ASCII equivalents and half-width katakana characters to their full-width equivalents.

### Converting full-width ASCII characters

In CJK texts, ASCII characters (such as letters and numbers) can appear in full-width form, occupying the space of two half-width characters. Full-width ASCII characters are typically used in East Asian typography for alignment with the width of CJK characters. However, for the purposes of indexing and searching, these full-width characters need to be normalized to their standard (half-width) ASCII equivalents.

The following example illustrates ASCII character normalization:

```
Full-width: ＡＢＣＤＥ １２３４５
Normalized (half-width): ABCDE 12345
```

### Converting half-width katakana characters

The `cjk_width` token filter converts half-width katakana characters to their full-width counterparts, which are the standard form used in Japanese text. This normalization, illustrated in the following example, is important for consistency in text processing and searching:


```
Half-width katakana: ｶﾀｶﾅ
Normalized (full-width) katakana: カタカナ
```

## Example

The following example request creates a new index named `cjk_width_example_index` and defines an analyzer with the `cjk_width` filter:

```json
PUT /cjk_width_example_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_width_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["cjk_width"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_width_example_index/_analyze
{
  "analyzer": "cjk_width_analyzer",
  "text": "Ｔｏｋｙｏ ２０２４ ｶﾀｶﾅ"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "Tokyo",
      "start_offset": 0,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "2024",
      "start_offset": 6,
      "end_offset": 10,
      "type": "<NUM>",
      "position": 1
    },
    {
      "token": "カタカナ",
      "start_offset": 11,
      "end_offset": 15,
      "type": "<KATAKANA>",
      "position": 2
    }
  ]
}
```
2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -16,7 +16,7 @@ Token filter | Underlying Lucene token filter | Description
[`apostrophe`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/apostrophe/) | [ApostropheFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/ApostropheFilter.html) | In each token containing an apostrophe, the `apostrophe` token filter removes the apostrophe itself and all characters following it.
[`asciifolding`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/asciifolding/) | [ASCIIFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html) | Converts alphabetic, numeric, and symbolic characters.
`cjk_bigram` | [CJKBigramFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html) | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens.
`cjk_width` | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules: <br> - Folds full-width ASCII character variants into the equivalent basic Latin characters. <br> - Folds half-width Katakana character variants into the equivalent Kana characters.
[`cjk_width`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/cjk-width/) | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules: <br> - Folds full-width ASCII character variants into their equivalent basic Latin characters. <br> - Folds half-width katakana character variants into their equivalent kana characters.
`classic` | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms.
`common_grams` | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams.
`conditional` | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script.
81 changes: 81 additions & 0 deletions _api-reference/document-apis/bulk-streaming.md
@@ -0,0 +1,81 @@
---
layout: default
title: Streaming bulk
parent: Document APIs
nav_order: 25
redirect_from:
- /opensearch/rest-api/document-apis/bulk/streaming/
---

# Streaming bulk
**Introduced 2.17.0**
{: .label .label-purple }

This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated [GitHub issue](https://github.com/opensearch-project/OpenSearch/issues/9065).
{: .warning}

The streaming bulk operation lets you add, update, or delete multiple documents by streaming the request and receiving the results as a streaming response. In comparison to the traditional [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/), streaming ingestion eliminates the need to estimate the batch size (which is affected by the cluster operational state at any given time) and naturally applies backpressure between many clients and the cluster. Streaming works over HTTP/2 or HTTP/1.1 (using chunked transfer encoding), depending on the capabilities of the client and the cluster.

The default HTTP transport method does not support streaming. You must install the [`transport-reactor-netty4`]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/network-settings/#selecting-the-transport) HTTP transport plugin and use it as the default HTTP transport layer. Both the `transport-reactor-netty4` plugin and the Streaming Bulk API are experimental.
{: .note}
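For reference, a minimal setup might look as follows. The plugin name comes from the linked network settings documentation; the `http.type` value shown here is an assumption for illustration, so verify it against that documentation for your version:

```
# Install the experimental streaming-capable HTTP transport plugin.
./bin/opensearch-plugin install transport-reactor-netty4
```

Then select it as the HTTP transport layer in `opensearch.yml`:

```yml
# Assumed setting value; see the network settings documentation.
http.type: reactor-netty4
```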

## Path and HTTP methods

```json
POST _bulk/stream
POST <index>/_bulk/stream
```

If you specify the index in the path, then you don't need to include it in the [request body chunks]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/#request-body).
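For example, the following sketch sends the same `index` operation as the example request further below, omitting `_index` from the action line because the index is already specified in the path:

```json
curl -X POST "http://localhost:9200/movies/_bulk/stream" -H "Transfer-Encoding: chunked" -H "Content-Type: application/json" -d'
{ "index": { "_id": "tt1979320" } }
{ "title": "Rush", "year": 2013 }
'
```
{% include copy.html %}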

OpenSearch also accepts PUT requests to the `_bulk/stream` path, but we highly recommend using POST. The accepted usage of PUT---adding or replacing a single resource on a given path---doesn't make sense for streaming bulk requests.
{: .note }


## Query parameters

The following table lists the available query parameters. All query parameters are optional.

Parameter | Data type | Description
:--- | :--- | :---
`pipeline` | String | The pipeline ID for preprocessing documents.
`refresh` | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` causes the changes to appear in search results immediately but degrades cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance isn't degraded.
`require_alias` | Boolean | Set to `true` to require that all actions target an index alias rather than an index. Default is `false`.
`routing` | String | Routes the request to the specified shard.
`timeout` | Time | How long to wait for the request to return. Default is `1m`.
`type` | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using the `_doc` type for all indexes.
`wait_for_active_shards` | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is `1` (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. For example, if you specify a value of 3, the index must have 2 replicas distributed across 2 additional nodes in order for the request to succeed.
`batch_interval` | Time | Specifies how long bulk operations should be accumulated into a batch before the batch is sent to data nodes.
`batch_size` | Integer | Specifies how many bulk operations should be accumulated into a batch before the batch is sent to data nodes. Default is `1`. For an illustration of both batching parameters, see the example following this table.
{% comment %}_source | List | asdf
`_source_excludes` | List | asdf
`_source_includes` | List | asdf{% endcomment %}
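As an illustration, the following hypothetical request uses both batching parameters, asking the server to accumulate up to 50 operations per batch and to send whatever has accumulated after 5 seconds:

```json
curl -X POST "http://localhost:9200/_bulk/stream?batch_interval=5s&batch_size=50" -H "Transfer-Encoding: chunked" -H "Content-Type: application/json" -d'
{ "index": { "_index": "movies", "_id": "tt1979320" } }
{ "title": "Rush", "year": 2013 }
'
```
{% include copy.html %}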

## Request body

The Streaming Bulk API request body is fully compatible with the [Bulk API request body]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/#request-body), where each bulk operation (create/index/update/delete) is sent as a separate chunk.

## Example request

```json
curl -X POST "http://localhost:9200/_bulk/stream" -H "Transfer-Encoding: chunked" -H "Content-Type: application/json" -d'
{ "delete": { "_index": "movies", "_id": "tt2229499" } }
{ "index": { "_index": "movies", "_id": "tt1979320" } }
{ "title": "Rush", "year": 2013 }
{ "create": { "_index": "movies", "_id": "tt1392214" } }
{ "title": "Prisoners", "year": 2013 }
{ "update": { "_index": "movies", "_id": "tt0816711" } }
{ "doc" : { "title": "World War Z" } }
'
```
{% include copy.html %}

## Example response

Depending on the batch settings, each streamed response chunk may report the results of one or many (batch) bulk operations. For example, for the preceding request with no batching (default), the streaming response may appear as follows:

```json
{"took": 11, "errors": false, "items": [ { "index": {"_index": "movies", "_id": "tt1979320", "_version": 1, "result": "created", "_shards": { "total": 2 "successful": 1, "failed": 0 }, "_seq_no": 1, "_primary_term": 1, "status": 201 } } ] }
{"took": 2, "errors": true, "items": [ { "create": { "_index": "movies", "_id": "tt1392214", "status": 409, "error": { "type": "version_conflict_engine_exception", "reason": "[tt1392214]: version conflict, document already exists (current version [1])", "index": "movies", "shard": "0", "index_uuid": "yhizhusbSWmP0G7OJnmcLg" } } } ] }
{"took": 4, "errors": true, "items": [ { "update": { "_index": "movies", "_id": "tt0816711", "status": 404, "error": { "type": "document_missing_exception", "reason": "[_doc][tt0816711]: document missing", "index": "movies", "shard": "0", "index_uuid": "yhizhusbSWmP0G7OJnmcLg" } } } ] }
```
12 changes: 6 additions & 6 deletions _api-reference/document-apis/bulk.md
@@ -53,16 +53,16 @@ All bulk URL parameters are optional.
Parameter | Type | Description
:--- | :--- | :---
pipeline | String | The pipeline ID for preprocessing documents.
refresh | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` makes the changes show up in search results immediately, but hurts cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance doesn't suffer.
refresh | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` causes the changes to appear in search results immediately but degrades cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance isn't degraded.
require_alias | Boolean | Set to `true` to require that all actions target an index alias rather than an index. Default is `false`.
routing | String | Routes the request to the specified shard.
timeout | Time | How long to wait for the request to return. Default `1m`.
type | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using a type of `_doc` for all indexes.
wait_for_active_shards | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is 1 (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. For example, if you specify a value of 3, the index must have two replicas distributed across two additional nodes for the request to succeed.
timeout | Time | How long to wait for the request to return. Default is `1m`.
type | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using the `_doc` type for all indexes.
wait_for_active_shards | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is `1` (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. For example, if you specify a value of 3, the index must have 2 replicas distributed across 2 additional nodes in order for the request to succeed.
batch_size | Integer | **(Deprecated)** Specifies the number of documents to be batched and sent to an ingest pipeline to be processed together. Default is `2147483647` (documents are ingested by an ingest pipeline all at once). If the bulk request doesn't explicitly specify an ingest pipeline or the index doesn't have a default ingest pipeline, then this parameter is ignored. Only documents with `create`, `index`, or `update` actions can be grouped into batches.
{% comment %}_source | List | asdf
_source_excludes | list | asdf
_source_includes | list | asdf{% endcomment %}
_source_excludes | List | asdf
_source_includes | List | asdf{% endcomment %}


## Request body
2 changes: 1 addition & 1 deletion _data-prepper/pipelines/configuration/sources/documentdb.md
@@ -3,7 +3,7 @@ layout: default
title: documentdb
parent: Sources
grand_parent: Pipelines
nav_order: 2
nav_order: 10
---

# documentdb
2 changes: 1 addition & 1 deletion _data-prepper/pipelines/configuration/sources/dynamo-db.md
@@ -3,7 +3,7 @@ layout: default
title: dynamodb
parent: Sources
grand_parent: Pipelines
nav_order: 3
nav_order: 20
---

# dynamodb
2 changes: 1 addition & 1 deletion _data-prepper/pipelines/configuration/sources/http.md
@@ -3,7 +3,7 @@ layout: default
title: http
parent: Sources
grand_parent: Pipelines
nav_order: 5
nav_order: 30
redirect_from:
- /data-prepper/pipelines/configuration/sources/http-source/
---
