From 34b02768305010c9064f4cd387374168193cc3fb Mon Sep 17 00:00:00 2001
From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Date: Thu, 22 Aug 2024 12:28:27 -0500
Subject: [PATCH 001/111] Add validate API (#7390)

* Add validate API.
Signed-off-by: Archer
* Add detailed responses
Signed-off-by: Archer
* Add working request and response examples
Signed-off-by: Archer
* Update validate.md
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Heather Halter
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update validate.md
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Nathan Bower
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Nathan Bower
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update _api-reference/validate.md
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update _api-reference/validate.md
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update _api-reference/validate.md
Co-authored-by: Nathan Bower
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update _api-reference/validate.md
Co-authored-by: Nathan Bower
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
---------
Signed-off-by: Archer
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Co-authored-by: Heather Halter
Co-authored-by: Nathan Bower
---
 _api-reference/validate.md | 205 +++++++++++++++++++++++++++++++++++++
 1 file changed, 205 insertions(+)
 create mode 100644 _api-reference/validate.md

diff --git a/_api-reference/validate.md b/_api-reference/validate.md
new file mode 100644
index 0000000000..6e1470a505
--- /dev/null
+++ b/_api-reference/validate.md
@@ -0,0 +1,205 @@
---
layout: default
title: Validate Query
nav_order: 87
---

# Validate Query

You can use the Validate Query API to validate a query without running it. The query can be sent as a path parameter or included in the request body.

## Path and HTTP methods

The Validate Query API contains the following paths:

```json
GET /_validate/query
GET /<index>/_validate/query
```

## Path parameters

All path parameters are optional.

Parameter | Data type | Description
:--- | :--- | :---
`index` | String | The index to validate the query against. If you don't specify an index or multiple indexes as part of the URL (or want to override the URL value for an individual search), you can include it here. Examples include `"logs-*"` and `["my-store", "sample_data_ecommerce"]`.
`query` | Query object | The query using [Query DSL]({{site.url}}{{site.baseurl}}/query-dsl/).

## Query parameters

The following table lists the available query parameters. All query parameters are optional.

Parameter | Data type | Description
:--- | :--- | :---
`all_shards` | Boolean | When `true`, validation is run against [all shards](#rewrite-and-all_shards) instead of against one shard per index. Default is `false`.
`allow_no_indices` | Boolean | Whether to ignore wildcards that don't match any indexes. Default is `true`.
`allow_partial_search_results` | Boolean | Whether to return partial results if the request encounters an error or times out. Default is `true`.
`analyzer` | String | The analyzer to use in the query string. This should only be used with the `q` option.
`analyze_wildcard` | Boolean | Specifies whether to analyze wildcard and prefix queries. Default is `false`.
`default_operator` | String | Indicates whether the default operator for a string query should be `AND` or `OR`. Default is `OR`.
`df` | String | The default field if a field prefix is not provided in the query string.
`expand_wildcards` | String | Specifies the type of index that wildcard expressions can match. Supports comma-separated values. Valid values are `all` (match any index), `open` (match open, non-hidden indexes), `closed` (match closed, non-hidden indexes), `hidden` (match hidden indexes), and `none` (deny wildcard expressions). Default is `open`.
`explain` | Boolean | Whether to return information about how OpenSearch computed the [document's score](#explain). Default is `false`.
`ignore_unavailable` | Boolean | Specifies whether to include missing or closed indexes in the response and ignores unavailable shards during the search request. Default is `false`.
`lenient` | Boolean | Specifies whether OpenSearch should ignore format-based query failures (for example, as a result of querying a text field for an integer). Default is `false`.
`rewrite` | String | Determines how OpenSearch [rewrites](#rewrite) and scores multi-term queries. Valid values are `constant_score`, `scoring_boolean`, `constant_score_boolean`, `top_terms_N`, `top_terms_boost_N`, and `top_terms_blended_freqs_N`. Default is `constant_score`.
`q` | String | A query in the Lucene string syntax.

## Example request

The following example request uses an index named `hamlet` created using a `bulk` request:

```json
PUT hamlet/_bulk?refresh
{"index":{"_id":1}}
{"user" : { "id": "hamlet" }, "@timestamp" : "2099-11-15T14:12:12", "message" : "To Search or Not To Search"}
{"index":{"_id":2}}
{"user" : { "id": "hamlet" }, "@timestamp" : "2099-11-15T14:12:13", "message" : "My dad says that I'm such a ham."}
```
{% include copy.html %}

You can then use the Validate Query API to validate an index query, as shown in the following example:

```json
GET hamlet/_validate/query?q=user.id:hamlet
```
{% include copy.html %}

The query can also be sent as a request body, as shown in the following example:

```json
GET hamlet/_validate/query
{
  "query" : {
    "bool" : {
      "must" : {
        "query_string" : {
          "query" : "*:*"
        }
      },
      "filter" : {
        "term" : { "user.id" : "hamlet" }
      }
    }
  }
}
```
{% include copy.html %}

## Example responses

If the query passes validation, then the response indicates that the query is valid, as shown in the following example response, where the `valid` parameter is `true`:

```
{
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "valid": true
}
```

If the query does not pass validation, then OpenSearch indicates that the query is not valid.
The following example request query includes a dynamic mapping not configured in the `hamlet` index: + +```json +GET hamlet/_validate/query +{ + "query": { + "query_string": { + "query": "@timestamp:foo", + "lenient": false + } + } +} +``` + +OpenSearch responds with the following, where the `valid` parameter is `false`: + +``` +{ + "_shards": { + "total": 1, + "successful": 1, + "failed": 0 + }, + "valid": false +} +``` + +Certain query parameters can also affect what is included in the response. The following examples show how the [Explain](#explain), [Rewrite](#rewrite), and [all_shards](#rewrite-and-all_shards) query options affect the response. + +### Explain + +The `explain` option returns information about the query failure in the `explanations` field, as shown in the following example response: + +``` +{ + "valid" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "failed" : 0 + }, + "explanations" : [ { + "index" : "_shakespeare", + "valid" : false, + "error" : "shakespeare/IAEc2nIXSSunQA_suI0MLw] QueryShardException[failed to create query:...failed to parse date field [foo]" + } ] +} +``` + + +### Rewrite + +When the `rewrite` option is set to `true` in the request, the `explanations` option shows the Lucene query that is executed as a string, as shown in the following response: + +``` +{ + "valid": true, + "_shards": { + "total": 1, + "successful": 1, + "failed": 0 + }, + "explanations": [ + { + "index": "", + "valid": true, + "explanation": "((user:hamlet^4.256753 play:hamlet^6.863601 play:romeo^2.8415773 plot:puck^3.4193945 plot:othello^3.8244398 ... )~4) -ConstantScore(_id:2) #(ConstantScore(_type:_doc))^0.0" + } + ] +} +``` + + +### Rewrite and all_shards + +When both the `rewrite` and `all_shards` options are set to `true`, the Validate Query API responds with detailed information from all available shards as opposed to only one shard (the default), as shown in the following response: + +``` +{ + "valid": true, + "_shards": { + "total": 1, + "successful": 1, + "failed": 0 + }, + "explanations": [ + { + "index": "my-index-000001", + "shard": 0, + "valid": true, + "explanation": "(user.id:hamlet)^0.6333333" + } + ] +} +``` + + + + + + From 97fe260b4ae7c1252e391e0c69c30e3613c5f281 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Thu, 22 Aug 2024 12:57:59 -0500 Subject: [PATCH 002/111] Update Lucene Max Dimenstion (#8051) Fixes #7886 Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _search-plugins/vector-search.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_search-plugins/vector-search.md b/_search-plugins/vector-search.md index 68f6dea08c..cd893f4144 100644 --- a/_search-plugins/vector-search.md +++ b/_search-plugins/vector-search.md @@ -97,7 +97,7 @@ In general, select NMSLIB or Faiss for large-scale use cases. 
Lucene is a good o | | NMSLIB/HNSW | Faiss/HNSW | Faiss/IVF | Lucene/HNSW | |:---|:---|:---|:---|:---| -| Max dimensions | 16,000 | 16,000 | 16,000 | 1,024 | +| Max dimensions | 16,000 | 16,000 | 16,000 | 16,000 | | Filter | Post-filter | Post-filter | Post-filter | Filter during search | | Training required | No | No | Yes | No | | Similarity metrics | `l2`, `innerproduct`, `cosinesimil`, `l1`, `linf` | `l2`, `innerproduct` | `l2`, `innerproduct` | `l2`, `cosinesimil` | From bb5eeeb2b5157c7239faaadb9021c2258d26d87a Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Fri, 23 Aug 2024 18:25:38 -0700 Subject: [PATCH 003/111] Remove Split Response Processor from 2.16 Search Pipeline docs (#8078) Signed-off-by: Daniel Widdis --- .../search-pipelines/search-processors.md | 1 - .../search-pipelines/split-processor.md | 236 ------------------ 2 files changed, 237 deletions(-) delete mode 100644 _search-plugins/search-pipelines/split-processor.md diff --git a/_search-plugins/search-pipelines/search-processors.md b/_search-plugins/search-pipelines/search-processors.md index d696859a78..cabca5fde1 100644 --- a/_search-plugins/search-pipelines/search-processors.md +++ b/_search-plugins/search-pipelines/search-processors.md @@ -45,7 +45,6 @@ Processor | Description | Earliest available version [`rerank`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rerank-processor/)| Reranks search results using a cross-encoder model. | 2.12 [`retrieval_augmented_generation`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rag-processor/) | Used for retrieval-augmented generation (RAG) in [conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/). | 2.10 (generally available in 2.12) [`sort`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/sort-processor/)| Sorts an array of items in either ascending or descending order. | 2.16 -[`split`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/split-processor/)| Splits a string field into an array of substrings based on a specified delimiter. | 2.16 [`truncate_hits`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/truncate-hits-processor/)| Discards search hits after a specified target count is reached. Can undo the effect of the `oversample` request processor. | 2.12 diff --git a/_search-plugins/search-pipelines/split-processor.md b/_search-plugins/search-pipelines/split-processor.md deleted file mode 100644 index 4afe49e6d2..0000000000 --- a/_search-plugins/search-pipelines/split-processor.md +++ /dev/null @@ -1,236 +0,0 @@ ---- -layout: default -title: Split -nav_order: 140 -has_children: false -parent: Search processors -grand_parent: Search pipelines ---- - -# Split processor -Introduced 2.16 -{: .label .label-purple } - -The `split` processor splits a string field into an array of substrings based on a specified delimiter. - -## Request fields - -The following table lists all available request fields. - -Field | Data type | Description -:--- | :--- | :--- -`field` | String | The field containing the string to be split. Required. -`separator` | String | The delimiter used to split the string. Specify either a single separator character or a regular expression pattern. Required. -`preserve_trailing` | Boolean | If set to `true`, preserves empty trailing fields (for example, `''`) in the resulting array. If set to `false`, then empty trailing fields are removed from the resulting array. Default is `false`. 
-`target_field` | String | The field in which the array of substrings is stored. If not specified, then the field is updated in place. -`tag` | String | The processor's identifier. -`description` | String | A description of the processor. -`ignore_failure` | Boolean | If `true`, then OpenSearch [ignores any failure]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/creating-search-pipeline/#ignoring-processor-failures) of this processor and continues to run the remaining processors in the search pipeline. Optional. Default is `false`. - -## Example - -The following example demonstrates using a search pipeline with a `split` processor. - -### Setup - -Create an index named `my_index` and index a document containing the field `message`: - -```json -POST /my_index/_doc/1 -{ - "message": "ingest, search, visualize, and analyze data", - "visibility": "public" -} -``` -{% include copy-curl.html %} - -### Creating a search pipeline - -The following request creates a search pipeline with a `split` response processor that splits the `message` field and stores the results in the `split_message` field: - -```json -PUT /_search/pipeline/my_pipeline -{ - "response_processors": [ - { - "split": { - "field": "message", - "separator": ", ", - "target_field": "split_message" - } - } - ] -} -``` -{% include copy-curl.html %} - -### Using a search pipeline - -Search for documents in `my_index` without a search pipeline: - -```json -GET /my_index/_search -``` -{% include copy-curl.html %} - -The response contains the field `message`: - -
- - Response - - {: .text-delta} -```json -{ - "took": 3, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "my_index", - "_id": "1", - "_score": 1, - "_source": { - "message": "ingest, search, visualize, and analyze data", - "visibility": "public" - } - } - ] - } -} -``` -
- -To search with a pipeline, specify the pipeline name in the `search_pipeline` query parameter: - -```json -GET /my_index/_search?search_pipeline=my_pipeline -``` -{% include copy-curl.html %} - -The `message` field is split and the results are stored in the `split_message` field: - -
- - Response - - {: .text-delta} - -```json -{ - "took": 6, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "my_index", - "_id": "1", - "_score": 1, - "_source": { - "visibility": "public", - "message": "ingest, search, visualize, and analyze data", - "split_message": [ - "ingest", - "search", - "visualize", - "and analyze data" - ] - } - } - ] - } -} -``` -
- -You can also use the `fields` option to search for specific fields in a document: - -```json -POST /my_index/_search?pretty&search_pipeline=my_pipeline -{ - "fields": ["visibility", "message"] -} -``` -{% include copy-curl.html %} - -In the response, the `message` field is split and the results are stored in the `split_message` field: - -
- - Response - - {: .text-delta} - -```json -{ - "took": 7, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "my_index", - "_id": "1", - "_score": 1, - "_source": { - "visibility": "public", - "message": "ingest, search, visualize, and analyze data", - "split_message": [ - "ingest", - "search", - "visualize", - "and analyze data" - ] - }, - "fields": { - "visibility": [ - "public" - ], - "message": [ - "ingest, search, visualize, and analyze data" - ], - "split_message": [ - "ingest", - "search", - "visualize", - "and analyze data" - ] - } - } - ] - } -} -``` -
\ No newline at end of file From c7330d860134241da1c3f78e0e493046cb1e5272 Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Sun, 25 Aug 2024 08:16:37 -0700 Subject: [PATCH 004/111] Add Split Response Processor to 2.17 Search Pipeline docs (#8081) Signed-off-by: Daniel Widdis --- .../search-pipelines/search-processors.md | 1 + .../search-pipelines/split-processor.md | 236 ++++++++++++++++++ 2 files changed, 237 insertions(+) create mode 100644 _search-plugins/search-pipelines/split-processor.md diff --git a/_search-plugins/search-pipelines/search-processors.md b/_search-plugins/search-pipelines/search-processors.md index cabca5fde1..83c46ca69d 100644 --- a/_search-plugins/search-pipelines/search-processors.md +++ b/_search-plugins/search-pipelines/search-processors.md @@ -45,6 +45,7 @@ Processor | Description | Earliest available version [`rerank`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rerank-processor/)| Reranks search results using a cross-encoder model. | 2.12 [`retrieval_augmented_generation`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rag-processor/) | Used for retrieval-augmented generation (RAG) in [conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/). | 2.10 (generally available in 2.12) [`sort`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/sort-processor/)| Sorts an array of items in either ascending or descending order. | 2.16 +[`split`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/split-processor/)| Splits a string field into an array of substrings based on a specified delimiter. | 2.17 [`truncate_hits`]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/truncate-hits-processor/)| Discards search hits after a specified target count is reached. Can undo the effect of the `oversample` request processor. | 2.12 diff --git a/_search-plugins/search-pipelines/split-processor.md b/_search-plugins/search-pipelines/split-processor.md new file mode 100644 index 0000000000..c524386262 --- /dev/null +++ b/_search-plugins/search-pipelines/split-processor.md @@ -0,0 +1,236 @@ +--- +layout: default +title: Split +nav_order: 140 +has_children: false +parent: Search processors +grand_parent: Search pipelines +--- + +# Split processor +Introduced 2.17 +{: .label .label-purple } + +The `split` processor splits a string field into an array of substrings based on a specified delimiter. + +## Request fields + +The following table lists all available request fields. + +Field | Data type | Description +:--- | :--- | :--- +`field` | String | The field containing the string to be split. Required. +`separator` | String | The delimiter used to split the string. Specify either a single separator character or a regular expression pattern. Required. +`preserve_trailing` | Boolean | If set to `true`, preserves empty trailing fields (for example, `''`) in the resulting array. If set to `false`, then empty trailing fields are removed from the resulting array. Default is `false`. +`target_field` | String | The field in which the array of substrings is stored. If not specified, then the field is updated in place. +`tag` | String | The processor's identifier. +`description` | String | A description of the processor. +`ignore_failure` | Boolean | If `true`, then OpenSearch [ignores any failure]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/creating-search-pipeline/#ignoring-processor-failures) of this processor and continues to run the remaining processors in the search pipeline. 
Optional. Default is `false`. + +## Example + +The following example demonstrates using a search pipeline with a `split` processor. + +### Setup + +Create an index named `my_index` and index a document containing the field `message`: + +```json +POST /my_index/_doc/1 +{ + "message": "ingest, search, visualize, and analyze data", + "visibility": "public" +} +``` +{% include copy-curl.html %} + +### Creating a search pipeline + +The following request creates a search pipeline with a `split` response processor that splits the `message` field and stores the results in the `split_message` field: + +```json +PUT /_search/pipeline/my_pipeline +{ + "response_processors": [ + { + "split": { + "field": "message", + "separator": ", ", + "target_field": "split_message" + } + } + ] +} +``` +{% include copy-curl.html %} + +### Using a search pipeline + +Search for documents in `my_index` without a search pipeline: + +```json +GET /my_index/_search +``` +{% include copy-curl.html %} + +The response contains the field `message`: + +
+ + Response + + {: .text-delta} +```json +{ + "took": 3, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "my_index", + "_id": "1", + "_score": 1, + "_source": { + "message": "ingest, search, visualize, and analyze data", + "visibility": "public" + } + } + ] + } +} +``` +
+ +To search with a pipeline, specify the pipeline name in the `search_pipeline` query parameter: + +```json +GET /my_index/_search?search_pipeline=my_pipeline +``` +{% include copy-curl.html %} + +The `message` field is split and the results are stored in the `split_message` field: + +
+ + Response + + {: .text-delta} + +```json +{ + "took": 6, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "my_index", + "_id": "1", + "_score": 1, + "_source": { + "visibility": "public", + "message": "ingest, search, visualize, and analyze data", + "split_message": [ + "ingest", + "search", + "visualize", + "and analyze data" + ] + } + } + ] + } +} +``` +
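In addition to referencing a named pipeline, a search pipeline can also be defined temporarily in the request body itself. The following sketch assumes that temporary-pipeline syntax (it is not part of the original example) and applies the same `split` processor for a single request:

```json
POST /my_index/_search
{
  "query": {
    "match_all": {}
  },
  "search_pipeline": {
    "response_processors": [
      {
        "split": {
          "field": "message",
          "separator": ", ",
          "target_field": "split_message"
        }
      }
    ]
  }
}
```
{% include copy-curl.html %}

This can be convenient for trying out processor settings before creating a named pipeline.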
+ +You can also use the `fields` option to search for specific fields in a document: + +```json +POST /my_index/_search?pretty&search_pipeline=my_pipeline +{ + "fields": ["visibility", "message"] +} +``` +{% include copy-curl.html %} + +In the response, the `message` field is split and the results are stored in the `split_message` field: + +
+ + Response + + {: .text-delta} + +```json +{ + "took": 7, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "my_index", + "_id": "1", + "_score": 1, + "_source": { + "visibility": "public", + "message": "ingest, search, visualize, and analyze data", + "split_message": [ + "ingest", + "search", + "visualize", + "and analyze data" + ] + }, + "fields": { + "visibility": [ + "public" + ], + "message": [ + "ingest, search, visualize, and analyze data" + ], + "split_message": [ + "ingest", + "search", + "visualize", + "and analyze data" + ] + } + } + ] + } +} +``` +
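As described in the request fields table, `separator` also accepts a regular expression pattern, and `preserve_trailing` keeps empty trailing substrings. The following sketch is illustrative only (the pipeline name and separator pattern are assumptions, not part of the original example); it splits the `message` field on commas or runs of whitespace and preserves empty trailing values:

```json
PUT /_search/pipeline/my_regex_pipeline
{
  "response_processors": [
    {
      "split": {
        "field": "message",
        "separator": "[,\\s]+",
        "preserve_trailing": true,
        "target_field": "split_message"
      }
    }
  ]
}
```
{% include copy-curl.html %}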
\ No newline at end of file From 3365b95623515c52986baf37ed8bd6a6cd521c77 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Mon, 26 Aug 2024 14:15:23 +0100 Subject: [PATCH 005/111] Adding inner hits docs (#7677) * Adding documentation for inner_hits #7507 Signed-off-by: AntonEliatra * Adding documentation for inner_hits #7507 Signed-off-by: AntonEliatra * fixing typos in inner_hits Signed-off-by: AntonEliatra * adding documenation for inner hits Signed-off-by: AntonEliatra * adding documenation for inner hits Signed-off-by: AntonEliatra * updating examples as per PR comments Signed-off-by: AntonEliatra * updating as per review comments Signed-off-by: Anton Rubin * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * updating as per PR comments Signed-off-by: Anton Rubin * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: AntonEliatra * updating as per PR comments Signed-off-by: Anton Rubin * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: AntonEliatra --------- Signed-off-by: AntonEliatra Signed-off-by: Anton Rubin Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _search-plugins/searching-data/index.md | 1 + _search-plugins/searching-data/inner-hits.md | 809 +++++++++++++++++++ 2 files changed, 810 insertions(+) create mode 100644 _search-plugins/searching-data/inner-hits.md diff --git a/_search-plugins/searching-data/index.md b/_search-plugins/searching-data/index.md index dab57d6460..279958d97c 100644 --- a/_search-plugins/searching-data/index.md +++ b/_search-plugins/searching-data/index.md @@ -18,4 +18,5 @@ Feature | Description [Paginate results]({{site.url}}{{site.baseurl}}/opensearch/search/paginate/) | Rather than a single, long list, separate search results into pages. [Sort results]({{site.url}}{{site.baseurl}}/opensearch/search/sort/) | Allow sorting of results by different criteria. [Highlight query matches]({{site.url}}{{site.baseurl}}/opensearch/search/highlight/) | Highlight the search term in the results. +[Retrieve inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/) | Retrieve underlying hits in nested and parent-join objects. 
[Retrieve specific fields]({{site.url}}{{site.baseurl}}search-plugins/searching-data/retrieve-specific-fields/) | Retrieve only the specific fields diff --git a/_search-plugins/searching-data/inner-hits.md b/_search-plugins/searching-data/inner-hits.md new file mode 100644 index 0000000000..395e9e748a --- /dev/null +++ b/_search-plugins/searching-data/inner-hits.md @@ -0,0 +1,809 @@ +--- +layout: default +title: Inner hits +parent: Searching data +has_children: false +nav_order: 70 +--- + +# Inner hits + +In OpenSearch, when you perform a search using [nested objects]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) or [parent-join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/), the underlying hits (nested inner objects or child documents) are hidden by default. You can retrieve inner hits by using the `inner_hits` parameter in the search query. + +You can also use `inner_hits` with the following features: + + - [Highlight query matches]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/highlight/) + - [Explain]({{site.url}}{{site.baseurl}}/api-reference/explain/) + +## Inner hits with nested objects +Nested objects allow you to index an array of objects and maintain their relationship within the same document. The following example request uses the `inner_hits` parameter to retrieve the underlying inner hits. + +1. Create an index mapping with a nested object: + + ```json + PUT /my_index + { + "mappings": { + "properties": { + "user": { + "type": "nested", + "properties": { + "name": { "type": "text" }, + "age": { "type": "integer" } + } + } + } + } + } + ``` + {% include copy-curl.html %} + +2. Index data: + + ```json + POST /my_index/_doc/1 + { + "group": "fans", + "user": [ + { + "name": "John Doe", + "age": 28 + }, + { + "name": "Jane Smith", + "age": 34 + } + ] + } + ``` + {% include copy-curl.html %} + +3. Query with `inner_hits`: + + ```json + GET /my_index/_search + { + "query": { + "nested": { + "path": "user", + "query": { + "bool": { + "must": [ + { "match": { "user.name": "John" } } + ] + } + }, + "inner_hits": {} + } + } + } + ``` + {% include copy-curl.html %} + +The preceding query searches for nested user objects containing the name John and returns the matching nested documents in the `inner_hits` section of the response: + +```json +{ + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 0.6931471, + "hits" : [ + { + "_index" : "my_index", + "_id" : "1", + "_score" : 0.6931471, + "_source" : { + "group" : "fans", + "user" : [ + { + "name" : "John Doe", + "age" : 28 + }, + { + "name" : "Jane Smith", + "age" : 34 + } + ] + }, + "inner_hits" : { + "user" : { + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 0.6931471, + "hits" : [ + { + "_index" : "my_index", + "_id" : "1", + "_nested" : { + "field" : "user", + "offset" : 0 + }, + "_score" : 0.6931471, + "_source" : { + "name" : "John Doe", + "age" : 28 + } + } + ] + } + } + } + } + ] + } +} +``` +## Inner hits with parent-child objects +Parent-join relationships allow you to create relationships between documents of different types within the same index. The following example request searches with `inner_hits` using parent-child objects. + +1. 
Create an index with a parent-join field: + + ```json + PUT /my_index + { + "mappings": { + "properties": { + "my_join_field": { + "type": "join", + "relations": { + "parent": "child" + } + }, + "text": { + "type": "text" + } + } + } + } + ``` + {% include copy-curl.html %} + +2. Index data: + + ```json + # Index a parent document + PUT /my_index/_doc/1 + { + "text": "This is a parent document", + "my_join_field": "parent" + } + + # Index a child document + PUT /my_index/_doc/2?routing=1 + { + "text": "This is a child document", + "my_join_field": { + "name": "child", + "parent": "1" + } + } + ``` + {% include copy-curl.html %} + +3. Search with `inner_hits`: + + ```json + GET /my_index/_search + { + "query": { + "has_child": { + "type": "child", + "query": { + "match": { + "text": "child" + } + }, + "inner_hits": {} + } + } + } + ``` + {% include copy-curl.html %} + +The preceding query searches for parent documents that have child documents matching the query criteria (in this case, containing the term `"child"`). It returns the matching child documents in the `inner_hits` section of the response: + +```json +{ + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "my_index", + "_id" : "1", + "_score" : 1.0, + "_source" : { + "text" : "This is a parent document", + "my_join_field" : "parent" + }, + "inner_hits" : { + "child" : { + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 0.6931471, + "hits" : [ + { + "_index" : "my_index", + "_id" : "2", + "_score" : 0.6931471, + "_routing" : "1", + "_source" : { + "text" : "This is a child document", + "my_join_field" : { + "name" : "child", + "parent" : "1" + } + } + } + ] + } + } + } + } + ] + } +} +``` + +## Using both parent-join and nested objects with `inner_hits` + +The following example demonstrates using both parent-join and nested objects with `inner_hits`. + +1. Create an index with the following mapping: + + ```json + PUT /my_index + { + "mappings": { + "properties": { + "my_join_field": { + "type": "join", + "relations": { + "parent": "child" + } + }, + "text": { + "type": "text" + }, + "comments": { + "type": "nested", + "properties": { + "user": { "type": "text" }, + "message": { "type": "text" } + } + } + } + } + } + ``` + {% include copy-curl.html %} + +2. Index data: + + ```json + # Index a parent document + PUT /my_index/_doc/1 + { + "text": "This is a parent document", + "my_join_field": "parent" + } + + # Index a child document with nested comments + PUT /my_index/_doc/2?routing=1 + { + "text": "This is a child document", + "my_join_field": { + "name": "child", + "parent": "1" + }, + "comments": [ + { + "user": "John", + "message": "This is a comment" + }, + { + "user": "Jane", + "message": "Another comment" + } + ] + } + ``` + {% include copy-curl.html %} + +3. Query with `inner_hits`: + + ```json + GET /my_index/_search + { + "query": { + "has_child": { + "type": "child", + "query": { + "nested": { + "path": "comments", + "query": { + "bool": { + "must": [ + { "match": { "comments.user": "John" } } + ] + } + }, + "inner_hits": {} + } + }, + "inner_hits": {} + } + } + } + ``` + {% include copy-curl.html %} + +The preceding query searches for parent documents that have child documents containing comments made by John. 
Specifying `inner_hits` ensures that the matching child documents and their nested comments are returned: + +```json +{ + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits" : [ + { + "_index" : "my_index", + "_id" : "1", + "_score" : 1.0, + "_source" : { + "text" : "This is a parent document", + "my_join_field" : "parent" + }, + "inner_hits" : { + "child" : { + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 0.6931471, + "hits" : [ + { + "_index" : "my_index", + "_id" : "2", + "_score" : 0.6931471, + "_routing" : "1", + "_source" : { + "text" : "This is a child document", + "my_join_field" : { + "name" : "child", + "parent" : "1" + }, + "comments" : [ + { + "user" : "John", + "message" : "This is a comment" + }, + { + "user" : "Jane", + "message" : "Another comment" + } + ] + }, + "inner_hits" : { + "comments" : { + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 0.6931471, + "hits" : [ + { + "_index" : "my_index", + "_id" : "2", + "_nested" : { + "field" : "comments", + "offset" : 0 + }, + "_score" : 0.6931471, + "_source" : { + "message" : "This is a comment", + "user" : "John" + } + } + ] + } + } + } + } + ] + } + } + } + } + ] + } +} +``` + + +## inner_hits parameters + +You can pass the following additional parameters to a search with `inner_hits` using both nested objects and parent-join relationships: + +* `from`: The offset from where to start fetching hits in the `inner_hits` results. +* `size`: The maximum number of inner hits to return. +* `sort`: The sorting order for the inner hits. +* `name`: A custom name for the inner hits in the response. This is useful in differentiating between multiple inner hits in a single query. + + +### Example: inner_hits parameters with nested objects + + +1. Create an index with the following mappings: + + ```json + PUT /products + { + "mappings": { + "properties": { + "product_name": { "type": "text" }, + "reviews": { + "type": "nested", + "properties": { + "user": { "type": "text" }, + "comment": { "type": "text" }, + "rating": { "type": "integer" } + } + } + } + } + } + ``` + {% include copy-curl.html %} + +2. Index data: + + ```json + POST /products/_doc/1 + { + "product_name": "Smartphone", + "reviews": [ + { "user": "Alice", "comment": "Great phone", "rating": 5 }, + { "user": "Bob", "comment": "Not bad", "rating": 3 }, + { "user": "Charlie", "comment": "Excellent", "rating": 4 } + ] + } + ``` + {% include copy-curl.html %} + + ```json + POST /products/_doc/2 + { + "product_name": "Laptop", + "reviews": [ + { "user": "Dave", "comment": "Very good", "rating": 5 }, + { "user": "Eve", "comment": "Good value", "rating": 4 } + ] + } + ``` + {% include copy-curl.html %} + +3. 
Query with `inner_hits` and provide additional parameters: + + ```json + GET /products/_search + { + "query": { + "nested": { + "path": "reviews", + "query": { + "match": { "reviews.comment": "Good" } + }, + "inner_hits": { + "from": 0, + "size": 2, + "sort": [ + { "reviews.rating": { "order": "desc" } } + ], + "name": "top_reviews" + } + } + } + } + ``` + {% include copy-curl.html %} + +The following is the expected result: + +```json +{ + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 0.83740485, + "hits" : [ + { + "_index" : "products", + "_id" : "2", + "_score" : 0.83740485, + "_source" : { + "product_name" : "Laptop", + "reviews" : [ + { + "user" : "Dave", + "comment" : "Very good", + "rating" : 5 + }, + { + "user" : "Eve", + "comment" : "Good value", + "rating" : 4 + } + ] + }, + "inner_hits" : { + "top_reviews" : { + "hits" : { + "total" : { + "value" : 2, + "relation" : "eq" + }, + "max_score" : null, + "hits" : [ + { + "_index" : "products", + "_id" : "2", + "_nested" : { + "field" : "reviews", + "offset" : 0 + }, + "_score" : null, + "_source" : { + "rating" : 5, + "comment" : "Very good", + "user" : "Dave" + }, + "sort" : [ + 5 + ] + }, + { + "_index" : "products", + "_id" : "2", + "_nested" : { + "field" : "reviews", + "offset" : 1 + }, + "_score" : null, + "_source" : { + "rating" : 4, + "comment" : "Good value", + "user" : "Eve" + }, + "sort" : [ + 4 + ] + } + ] + } + } + } + } + ] + } +} +``` + + +### Example: inner_hits parameters with a parent-join relationship + + +1. Create an index with the following mappings: + + ```json + PUT /company + { + "mappings": { + "properties": { + "my_join_field": { + "type": "join", + "relations": { + "employee": "task" + } + }, + "name": { "type": "text" }, + "description": { + "type": "text", + "fields": { + "keyword": { "type": "keyword" } + } + } + } + } + } + ``` + {% include copy-curl.html %} + +2. Index data: + + ```json + # Index a parent document + PUT /company/_doc/1 + { + "name": "Alice", + "my_join_field": "employee" + } + ``` + {% include copy-curl.html %} + + ```json + # Index child documents + PUT /company/_doc/2?routing=1 + { + "description": "Complete the project", + "my_join_field": { + "name": "task", + "parent": "1" + } + } + ``` + {% include copy-curl.html %} + + ```json + PUT /company/_doc/3?routing=1 + { + "description": "Prepare the report", + "my_join_field": { + "name": "task", + "parent": "1" + } + } + ``` + {% include copy-curl.html %} + + ```json + PUT /company/_doc/4?routing=1 + { + "description": "Update project", + "my_join_field": { + "name": "task", + "parent": "1" + } + } + ``` + {% include copy-curl.html %} + +3. 
Query with `inner_hits` parameters: + + ```json + GET /company/_search + { + "query": { + "has_child": { + "type": "task", + "query": { + "match": { "description": "project" } + }, + "inner_hits": { + "from": 0, + "size": 10, + "sort": [ + { "description.keyword": { "order": "asc" } } + ], + "name": "related_tasks" + } + } + } + } + ``` + {% include copy-curl.html %} + +The following is the expected result: + +```json +{ + "hits" : { + "total" : { + "value" : 1, + "relation" : "eq" + }, + "max_score" : 1.0, + "hits": [ + { + "_index": "company", + "_id": "1", + "_score": 1, + "_source": { + "name": "Alice", + "my_join_field": "employee" + }, + "inner_hits": { + "related_tasks": { + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": null, + "hits": [ + { + "_index": "company", + "_id": "2", + "_score": null, + "_routing": "1", + "_source": { + "description": "Complete the project", + "my_join_field": { + "name": "task", + "parent": "1" + } + }, + "sort": [ + "Complete the project" + ] + }, + { + "_index": "company", + "_id": "4", + "_score": null, + "_routing": "1", + "_source": { + "description": "Update project", + "my_join_field": { + "name": "task", + "parent": "1" + } + }, + "sort": [ + "Update project" + ] + } + ] + } + } + } + } + ] + } +} +``` + +## Benefits of using inner_hits + +* **Detailed query results** + + You can use `inner_hits` to retrieve detailed information about matching nested or child documents directly from the parent document's search results. This is particularly useful for understanding the context and specifics of the match without having to perform additional queries. + + Example use case: In a blog post index, you have comments as nested objects. When searching for blog posts containing specific comments, you can retrieve relevant comments that match the search criteria along with information about the post. + +* **Optimized performance** + + Without `inner_hits`, you may need to run multiple queries to fetch related documents. Using `inner_hits` consolidates these into a single query, reducing the number of round trips to the OpenSearch server and improving overall performance. + + Example use case: In an e-commerce application, you have products as parent documents and reviews as child documents. A single query using `inner_hits` can fetch products and their relevant reviews, avoiding multiple separate queries. + +* **Simplified query logic** + + You can combine parent/child or nested document logic in a single query to simplify the application code and reduce complexity. This helps to ensure that the code is more maintainable and consistent by centralizing the query logic in OpenSearch + + Example use case: In a job portal, you have jobs as parent documents and applications as nested or child documents. You can simplify the application logic by fetching jobs along with specific applications in one query. + +* **Contextual relevance** + + Using `inner_hits` provides contextual relevance by showing exactly which nested or child documents match the query criteria. This is crucial for applications in which the relevance of results depends on a specific part of the document that matches the query. + + Example use case: In a customer support system, you have tickets as parent documents and comments or updates as nested or child documents. You can determine which specific comment matches the search in order to better understand the context of the ticket search. 
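As noted in the introduction, you can combine `inner_hits` with features such as highlighting. The following minimal sketch reuses the nested `user` example index created at the beginning of this page (the highlight configuration itself is an illustrative assumption) to highlight the matched nested field within each inner hit:

```json
GET /my_index/_search
{
  "query": {
    "nested": {
      "path": "user",
      "query": {
        "match": { "user.name": "John" }
      },
      "inner_hits": {
        "highlight": {
          "fields": {
            "user.name": {}
          }
        }
      }
    }
  }
}
```
{% include copy-curl.html %}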
\ No newline at end of file From 15d3c6347f9dbc77037dbc5d2278cba79abfb701 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Mon, 26 Aug 2024 14:05:28 -0500 Subject: [PATCH 006/111] Fix broken external links (#8084) Signed-off-by: Archer --- _data-prepper/pipelines/configuration/sinks/s3.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/_data-prepper/pipelines/configuration/sinks/s3.md b/_data-prepper/pipelines/configuration/sinks/s3.md index d1413f6ffc..3ff266cccf 100644 --- a/_data-prepper/pipelines/configuration/sinks/s3.md +++ b/_data-prepper/pipelines/configuration/sinks/s3.md @@ -173,14 +173,14 @@ When you provide your own Avro schema, that schema defines the final structure o In cases where your data is uniform, you may be able to automatically generate a schema. Automatically generated schemas are based on the first event that the codec receives. The schema will only contain keys from this event, and all keys must be present in all events in order to automatically generate a working schema. Automatically generated schemas make all fields nullable. Use the `include_keys` and `exclude_keys` sink configurations to control which data is included in the automatically generated schema. -Avro fields should use a null [union](https://avro.apache.org/docs/current/specification/#unions) because this will allow missing values. Otherwise, all required fields must be present for each event. Use non-nullable fields only when you are certain they exist. +Avro fields should use a null [union](https://avro.apache.org/docs/1.10.2/spec.html#Unions) because this will allow missing values. Otherwise, all required fields must be present for each event. Use non-nullable fields only when you are certain they exist. Use the following options to configure the codec. Option | Required | Type | Description :--- | :--- | :--- | :--- -`schema` | Yes | String | The Avro [schema declaration](https://avro.apache.org/docs/current/specification/#schema-declaration). Not required if `auto_schema` is set to true. -`auto_schema` | No | Boolean | When set to `true`, automatically generates the Avro [schema declaration](https://avro.apache.org/docs/current/specification/#schema-declaration) from the first event. +`schema` | Yes | String | The Avro [schema declaration](https://avro.apache.org/docs/1.2.0/spec.html#schemas). Not required if `auto_schema` is set to true. +`auto_schema` | No | Boolean | When set to `true`, automatically generates the Avro [schema declaration](https://avro.apache.org/docs/1.2.0/spec.html#schemas) from the first event. ### `ndjson` codec From af7914128b52a5a24005fe5f219037043f451ad2 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 27 Aug 2024 10:54:37 -0400 Subject: [PATCH 007/111] Add performance metrics to Vale (#8087) Signed-off-by: Fanit Kolchina --- .github/vale/styles/Vocab/OpenSearch/Words/accept.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt index 9e09f21c3a..aa04726e42 100644 --- a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt +++ b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt @@ -80,6 +80,7 @@ Levenshtein [Oo]versamples? 
[Oo]nboarding pebibyte +p\d{2} [Pp]erformant [Pp]laintext [Pp]luggable From 49d8d23e1e952c20945a1cfb52539c45dc51ec49 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 27 Aug 2024 17:23:09 -0400 Subject: [PATCH 008/111] Add 1.3.19 to version history (#8090) Signed-off-by: Fanit Kolchina --- _about/version-history.md | 1 + 1 file changed, 1 insertion(+) diff --git a/_about/version-history.md b/_about/version-history.md index d7273ffedb..fd635aff5b 100644 --- a/_about/version-history.md +++ b/_about/version-history.md @@ -31,6 +31,7 @@ OpenSearch version | Release highlights | Release date [2.0.1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.1.md) | Includes bug fixes and maintenance updates for Alerting and Anomaly Detection. | 16 June 2022 [2.0.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.0.md) | Includes document-level monitors for alerting, OpenSearch Notifications plugins, and Geo Map Tiles in OpenSearch Dashboards. Also adds support for Lucene 9 and bug fixes for all OpenSearch plugins. For a full list of release highlights, see the Release Notes. | 26 May 2022 [2.0.0-rc1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.0-rc1.md) | The Release Candidate for 2.0.0. This version allows you to preview the upcoming 2.0.0 release before the GA release. The preview release adds document-level alerting, support for Lucene 9, and the ability to use term lookup queries in document level security. | 03 May 2022 +[1.3.19](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.19.md) | Includes bug fixes and maintenance updates for OpenSearch security, OpenSearch security Dashboards, and anomaly detection. | 27 August 2024 [1.3.18](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.18.md) | Includes maintenance updates for OpenSearch security. | 16 July 2024 [1.3.17](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.17.md) | Includes maintenance updates for OpenSearch security and OpenSearch Dashboards security. | 06 June 2024 [1.3.16](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.16.md) | Includes bug fixes and maintenance updates for OpenSearch security, index management, performance analyzer, and reporting. 
| 23 April 2024 From 39cb029c2daf5c147761fed6077c901ff0aba6e1 Mon Sep 17 00:00:00 2001 From: Owais Kazi Date: Wed, 28 Aug 2024 06:06:08 -0700 Subject: [PATCH 009/111] Added documentation for FGAC for Flow Framework (#8076) * Added documentation for FGAC for Flow Framework Signed-off-by: Owais * Addressed PR comments Signed-off-by: Owais * Doc review Signed-off-by: Fanit Kolchina * Code format Signed-off-by: Fanit Kolchina * Add link on index page Signed-off-by: Fanit Kolchina * Add link from api page Signed-off-by: Fanit Kolchina * Change title to flow framework security Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Changed to workflow template security Signed-off-by: Fanit Kolchina --------- Signed-off-by: Owais Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../styles/Vocab/OpenSearch/Words/accept.txt | 1 + _automating-configurations/api/index.md | 4 +- _automating-configurations/index.md | 1 + .../workflow-security.md | 93 +++++++++++++++++++ 4 files changed, 98 insertions(+), 1 deletion(-) create mode 100644 _automating-configurations/workflow-security.md diff --git a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt index aa04726e42..11ff53efe6 100644 --- a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt +++ b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt @@ -127,6 +127,7 @@ stdout [Ss]ubvector [Ss]ubwords? [Ss]uperset +[Ss]uperadmins? [Ss]yslog tebibyte [Tt]emplated diff --git a/_automating-configurations/api/index.md b/_automating-configurations/api/index.md index 716e19c41f..78bc4eaede 100644 --- a/_automating-configurations/api/index.md +++ b/_automating-configurations/api/index.md @@ -18,4 +18,6 @@ OpenSearch supports the following workflow APIs: * [Search workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/search-workflow/) * [Search workflow state]({{site.url}}{{site.baseurl}}/automating-configurations/api/search-workflow-state/) * [Deprovision workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/deprovision-workflow/) -* [Delete workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/delete-workflow/) \ No newline at end of file +* [Delete workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/delete-workflow/) + +For information about workflow access control, see [Workflow template security]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-security/). \ No newline at end of file diff --git a/_automating-configurations/index.md b/_automating-configurations/index.md index 144ad445c8..68742f6149 100644 --- a/_automating-configurations/index.md +++ b/_automating-configurations/index.md @@ -44,3 +44,4 @@ Workflow automation provides the following benefits: - For the workflow step syntax, see [Workflow steps]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-steps/). - For a complete example, see [Workflow tutorial]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-tutorial/). - For configurable settings, see [Workflow settings]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-settings/). 
+- For information about workflow access control, see [Workflow template security]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-security/). \ No newline at end of file diff --git a/_automating-configurations/workflow-security.md b/_automating-configurations/workflow-security.md new file mode 100644 index 0000000000..f3a3d7eeb9 --- /dev/null +++ b/_automating-configurations/workflow-security.md @@ -0,0 +1,93 @@ +--- +layout: default +title: Workflow template security +nav_order: 50 +--- + +# Workflow template security + +In OpenSearch, automated workflow configurations are provided by the Flow Framework plugin. You can use the Security plugin together with the Flow Framework plugin to limit non-admin users to specific actions. For example, you might want some users to only be able to create, update, or delete workflows, while others may only be able to view workflows. + +All Flow Framework indexes are protected as system indexes. Only a superadmin user or an admin user with a TLS certificate can access system indexes. For more information, see [System indexes]({{site.url}}{{site.baseurl}}/security/configuration/system-indices/). + +Security for Flow Framework is set up similarly to [security for anomaly detection]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/security/). + +## Basic permissions + +As an admin user, you can use the Security plugin to assign specific permissions to users based on the APIs they need to access. For a list of supported Flow Framework APIs, see [Workflow APIs]({{site.url}}{{site.baseurl}}/automating-configurations/api/index/). + +The Security plugin has two built-in roles that cover most Flow Framework use cases: `flow_framework_full_access` and `flow_framework_read_access`. For descriptions of each, see [Predefined roles]({{site.url}}{{site.baseurl}}/security/access-control/users-roles#predefined-roles). + +If these roles don't meet your needs, you can assign users individual Flow Framework [permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/) to suit your use case. Each action corresponds to an operation in the REST API. For example, the `cluster:admin/opensearch/flow_framework/workflow/search` permission lets you search workflows. + +### Fine-grained access control + +To reduce the chances of unintended users viewing metadata that describes an index, we recommend that administrators enable role-based access control when assigning permissions to the intended user group. For more information, see [Limit access by backend role](#advanced-limit-access-by-backend-role). + +## (Advanced) Limit access by backend role + +Use backend roles to configure fine-grained access to individual workflows based on roles. For example, users in different departments of an organization can view workflows owned by their own department. + +First, make sure your users have the appropriate [backend roles]({{site.url}}{{site.baseurl}}/security/access-control/index/). Backend roles usually come from an [LDAP server]({{site.url}}{{site.baseurl}}/security/configuration/ldap/) or [SAML provider]({{site.url}}{{site.baseurl}}/security/configuration/saml/), but if you use an internal user database, you can [create users manually using the API]({{site.url}}{{site.baseurl}}/security/access-control/api#create-user). 
+ +Next, enable the following setting: + +```json +PUT _cluster/settings +{ + "transient": { + "plugins.flow_framework.filter_by_backend_roles": "true" + } +} +``` +{% include copy-curl.html %} + +Now when users view workflow resources in OpenSearch Dashboards (or make REST API calls), they only see workflows created by users who share at least one backend role. + +For example, consider two users: `alice` and `bob`. + +`alice` has an `analyst` backend role: + +```json +PUT _plugins/_security/api/internalusers/alice +{ + "password": "alice", + "backend_roles": [ + "analyst" + ], + "attributes": {} +} +``` + +`bob` has a `human-resources` backend role: + +```json +PUT _plugins/_security/api/internalusers/bob +{ + "password": "bob", + "backend_roles": [ + "human-resources" + ], + "attributes": {} +} +``` + +Both `alice` and `bob` have full access to the Flow Framework APIs: + +```json +PUT _plugins/_security/api/rolesmapping/flow_framework_full_access +{ + "backend_roles": [], + "hosts": [], + "users": [ + "alice", + "bob" + ] +} +``` + +Because they have different backend roles, `alice` and `bob` cannot view each other's workflows or their results. + +Users without backend roles can still view other users' workflow results if they have `flow_framework_read_access`. This also applies to users who have `flow_framework_full_access` because this permission includes all of the permissions of `flow_framework_read_access`. + +Administrators should inform users that the `flow_framework_read_access` permission allows them to view the results of any workflow in a cluster, including data not directly accessible to them. To limit access to the results of a specific workflow, administrators should apply backend role filters when creating the workflow. This ensures that only users with matching backend roles can access that workflow's results. \ No newline at end of file From 19eacb854e65b79d6c2e4d3a325419f36f8f0c51 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Wed, 28 Aug 2024 09:18:27 -0400 Subject: [PATCH 010/111] Add joining queries index page (#8088) * Add joining queries index page Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Melissa Vagi Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Melissa Vagi Co-authored-by: Nathan Bower --- _query-dsl/joining/index.md | 18 ++++++++++++++++++ _query-dsl/term/fuzzy.md | 2 +- _query-dsl/term/prefix.md | 2 +- _query-dsl/term/range.md | 2 +- _query-dsl/term/regexp.md | 2 +- _query-dsl/term/wildcard.md | 2 +- 6 files changed, 23 insertions(+), 5 deletions(-) create mode 100644 _query-dsl/joining/index.md diff --git a/_query-dsl/joining/index.md b/_query-dsl/joining/index.md new file mode 100644 index 0000000000..20f48c0b16 --- /dev/null +++ b/_query-dsl/joining/index.md @@ -0,0 +1,18 @@ +--- +layout: default +title: Joining queries +has_children: true +nav_order: 55 +--- + +# Joining queries + +OpenSearch is a distributed system in which data is spread across multiple nodes. Thus, running a SQL-like JOIN operation in OpenSearch is resource intensive. 
As an alternative, OpenSearch provides the following queries that perform join operations and are optimized for scaling across multiple nodes: + +- `nested` queries: Act as wrappers for other queries to search [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields. The nested field objects are searched as though they were indexed as separate documents. +- `has_child` queries: Search for parent documents whose child documents match the query. +- `has_parent` queries: Search for child documents whose parent documents match the query. +- `parent_id` queries: A [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) field type establishes a parent/child relationship between documents in the same index. `parent_id` queries search for child documents that are joined to a specific parent document. + +If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then joining queries are not executed. +{: .important} \ No newline at end of file diff --git a/_query-dsl/term/fuzzy.md b/_query-dsl/term/fuzzy.md index 7a426fd794..0e448bbbda 100644 --- a/_query-dsl/term/fuzzy.md +++ b/_query-dsl/term/fuzzy.md @@ -89,5 +89,5 @@ Parameter | Data type | Description Specifying a large value in `max_expansions` can lead to poor performance, especially if `prefix_length` is set to `0`, because of the large number of variations of the word that OpenSearch tries to match. {: .warning} -If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, fuzzy queries are not run. +If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then fuzzy queries are not executed. {: .important} diff --git a/_query-dsl/term/prefix.md b/_query-dsl/term/prefix.md index 2a429c9f0e..087c26cc30 100644 --- a/_query-dsl/term/prefix.md +++ b/_query-dsl/term/prefix.md @@ -66,5 +66,5 @@ Parameter | Data type | Description `case_insensitive` | Boolean | If `true`, allows case-insensitive matching of the value with the indexed field values. Default is `false` (case sensitivity is determined by the field's mapping). `rewrite` | String | Determines how OpenSearch rewrites and scores multi-term queries. Valid values are `constant_score`, `scoring_boolean`, `constant_score_boolean`, `top_terms_N`, `top_terms_boost_N`, and `top_terms_blended_freqs_N`. Default is `constant_score`. -If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, prefix queries are not run. If `index_prefixes` is enabled, the `search.allow_expensive_queries` setting is ignored and an optimized query is built and run. +If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then prefix queries are not executed. If `index_prefixes` is enabled, then the `search.allow_expensive_queries` setting is ignored and an optimized query is built and run. {: .important} diff --git a/_query-dsl/term/range.md b/_query-dsl/term/range.md index ceb264db76..1e6fece218 100644 --- a/_query-dsl/term/range.md +++ b/_query-dsl/term/range.md @@ -217,5 +217,5 @@ Parameter | Data type | Description `boost` | Floating-point | A floating-point value that specifies the weight of this field toward the relevance score. Values above 1.0 increase the field’s relevance. Values between 0.0 and 1.0 decrease the field’s relevance. 
Default is 1.0. `time_zone` | String | The time zone used to convert [`date`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/date/) values to UTC in the query. Valid values are ISO 8601 [UTC offsets](https://en.wikipedia.org/wiki/List_of_UTC_offsets) and [IANA time zone IDs](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For more information, see [Time zone](#time-zone). -If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, range queries on [`text`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/text/) and [`keyword`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/keyword/) fields are not run. +If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then range queries on [`text`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/text/) and [`keyword`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/keyword/) fields are not executed. {: .important} diff --git a/_query-dsl/term/regexp.md b/_query-dsl/term/regexp.md index 4a038729c0..34a0c916ce 100644 --- a/_query-dsl/term/regexp.md +++ b/_query-dsl/term/regexp.md @@ -61,5 +61,5 @@ Parameter | Data type | Description `max_determinized_states` | Integer | Lucene converts a regular expression to an automaton with a number of determinized states. This parameter specifies the maximum number of automaton states the query requires. Use this parameter to prevent high resource consumption. To run complex regular expressions, you may need to increase the value of this parameter. Default is 10,000. `rewrite` | String | Determines how OpenSearch rewrites and scores multi-term queries. Valid values are `constant_score`, `scoring_boolean`, `constant_score_boolean`, `top_terms_N`, `top_terms_boost_N`, and `top_terms_blended_freqs_N`. Default is `constant_score`. -If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, `regexp` queries are not run. +If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then `regexp` queries are not executed. {: .important} diff --git a/_query-dsl/term/wildcard.md b/_query-dsl/term/wildcard.md index b2d7238758..c6e0499517 100644 --- a/_query-dsl/term/wildcard.md +++ b/_query-dsl/term/wildcard.md @@ -64,5 +64,5 @@ Parameter | Data type | Description `case_insensitive` | Boolean | If `true`, allows case-insensitive matching of the value with the indexed field values. Default is `false` (case sensitivity is determined by the field's mapping). `rewrite` | String | Determines how OpenSearch rewrites and scores multi-term queries. Valid values are `constant_score`, `scoring_boolean`, `constant_score_boolean`, `top_terms_N`, `top_terms_boost_N`, and `top_terms_blended_freqs_N`. Default is `constant_score`. -If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, wildcard queries are not run. +If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then wildcard queries are not executed. 
{: .important} From ba56c07bb1c498416faddfd024fe40cf9d366ae4 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Wed, 28 Aug 2024 15:37:17 -0400 Subject: [PATCH 011/111] Refactor OpenSearch/Dashboards front page (#8114) * Refactor OpenSearch/Dashboards front page Signed-off-by: Fanit Kolchina * Editorial comments Signed-off-by: Fanit Kolchina --------- Signed-off-by: Fanit Kolchina --- _about/index.md | 52 +++++++++++++++++++++---------------- _getting-started/intro.md | 1 + _ml-commons-plugin/index.md | 4 +++ _search-plugins/index.md | 26 ++++++++++--------- 4 files changed, 49 insertions(+), 34 deletions(-) diff --git a/_about/index.md b/_about/index.md index d2cc011b55..041197eeba 100644 --- a/_about/index.md +++ b/_about/index.md @@ -22,16 +22,21 @@ This section contains documentation for OpenSearch and OpenSearch Dashboards. ## Getting started -- [Intro to OpenSearch]({{site.url}}{{site.baseurl}}/intro/) -- [Quickstart]({{site.url}}{{site.baseurl}}/quickstart/) +To get started, explore the following documentation: + +- [Getting started guide]({{site.url}}{{site.baseurl}}/getting-started/): + - [Intro to OpenSearch]({{site.url}}{{site.baseurl}}/getting-started/intro/) + - [Installation quickstart]({{site.url}}{{site.baseurl}}/getting-started/quickstart/) + - [Communicate with OpenSearch]({{site.url}}{{site.baseurl}}/getting-started/communicate/) + - [Ingest data]({{site.url}}{{site.baseurl}}/getting-started/ingest-data/) + - [Search data]({{site.url}}{{site.baseurl}}/getting-started/search-data/) + - [Getting started with OpenSearch security]({{site.url}}{{site.baseurl}}/getting-started/security/) - [Install OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/index/) - [Install OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/install-and-configure/install-dashboards/index/) -- [See the FAQ](https://opensearch.org/faq) +- [FAQ](https://opensearch.org/faq) ## Why use OpenSearch? -With OpenSearch, you can perform the following use cases: - @@ -41,35 +46,38 @@ With OpenSearch, you can perform the following use cases: - - - - + + + + - + - +
Operational health tracking
Fast, Scalable Full-text SearchApplication and Infrastructure MonitoringSecurity and Event Information ManagementOperational Health TrackingFast, scalable full-text searchApplication and infrastructure monitoringSecurity and event information managementOperational health tracking
Help users find the right information within your application, website, or data lake catalog. Easily store and analyze log data, and set automated alerts for underperformance.Easily store and analyze log data, and set automated alerts for performance issues. Centralize logs to enable real-time security monitoring and forensic analysis.Use observability logs, metrics, and traces to monitor your applications and business in real time.Use observability logs, metrics, and traces to monitor your applications in real time.
-**Additional features and plugins:** +## Key features + +OpenSearch provides several features to help index, secure, monitor, and analyze your data: -OpenSearch has several features and plugins to help index, secure, monitor, and analyze your data. Most OpenSearch plugins have corresponding OpenSearch Dashboards plugins that provide a convenient, unified user interface. -- [Anomaly detection]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/) - Identify atypical data and receive automatic notifications -- [KNN]({{site.url}}{{site.baseurl}}/search-plugins/knn/) - Find “nearest neighbors” in your vector data -- [Performance Analyzer]({{site.url}}{{site.baseurl}}/monitoring-plugins/pa/) - Monitor and optimize your cluster -- [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/) - Use SQL or a piped processing language to query your data -- [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/) - Automate index operations -- [ML Commons plugin]({{site.url}}{{site.baseurl}}/ml-commons-plugin/index/) - Train and execute machine-learning models -- [Asynchronous search]({{site.url}}{{site.baseurl}}/search-plugins/async/) - Run search requests in the background -- [Cross-cluster replication]({{site.url}}{{site.baseurl}}/replication-plugin/index/) - Replicate your data across multiple OpenSearch clusters +- [Anomaly detection]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/) -- Identify atypical data and receive automatic notifications. +- [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/) -- Use SQL or a Piped Processing Language (PPL) to query your data. +- [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/) -- Automate index operations. +- [Search methods]({{site.url}}{{site.baseurl}}/search-plugins/knn/) -- From traditional lexical search to advanced vector and hybrid search, discover the optimal search method for your use case. +- [Machine learning]({{site.url}}{{site.baseurl}}/ml-commons-plugin/index/) -- Integrate machine learning models into your workloads. +- [Workflow automation]({{site.url}}{{site.baseurl}}/automating-configurations/index/) -- Automate complex OpenSearch setup and preprocessing tasks. +- [Performance evaluation]({{site.url}}{{site.baseurl}}/monitoring-plugins/pa/) -- Monitor and optimize your cluster. +- [Asynchronous search]({{site.url}}{{site.baseurl}}/search-plugins/async/) -- Run search requests in the background. +- [Cross-cluster replication]({{site.url}}{{site.baseurl}}/replication-plugin/index/) -- Replicate your data across multiple OpenSearch clusters. ## The secure path forward -OpenSearch includes a demo configuration so that you can get up and running quickly, but before using OpenSearch in a production environment, you must [configure the Security plugin manually]({{site.url}}{{site.baseurl}}/security/configuration/index/) with your own certificates, authentication method, users, and passwords. + +OpenSearch includes a demo configuration so that you can get up and running quickly, but before using OpenSearch in a production environment, you must [configure the Security plugin manually]({{site.url}}{{site.baseurl}}/security/configuration/index/) with your own certificates, authentication method, users, and passwords. To get started, see [Getting started with OpenSearch security]({{site.url}}{{site.baseurl}}/getting-started/security/). ## Looking for the Javadoc? 
diff --git a/_getting-started/intro.md b/_getting-started/intro.md index edd178a23f..f5eb24ba2b 100644 --- a/_getting-started/intro.md +++ b/_getting-started/intro.md @@ -56,6 +56,7 @@ ID | Name | GPA | Graduation year 1 | John Doe | 3.89 | 2022 2 | Jonathan Powers | 3.85 | 2025 3 | Jane Doe | 3.52 | 2024 +... | | | ## Clusters and nodes diff --git a/_ml-commons-plugin/index.md b/_ml-commons-plugin/index.md index f0355b6be3..50d637379e 100644 --- a/_ml-commons-plugin/index.md +++ b/_ml-commons-plugin/index.md @@ -32,6 +32,10 @@ ML Commons supports various algorithms to help train ML models and make predicti ML Commons provides its own set of REST APIs. For more information, see [ML Commons API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/index/). +## ML-powered search + +For information about available ML-powered search types, see [ML-powered search]({{site.url}}{{site.baseurl}}/search-plugins/index/#ml-powered-search). + ## Tutorials Using the OpenSearch ML framework, you can build various applications, from implementing conversational search to building your own chatbot. For more information, see [Tutorials]({{site.url}}{{site.baseurl}}/ml-commons-plugin/tutorials/index/). \ No newline at end of file diff --git a/_search-plugins/index.md b/_search-plugins/index.md index 79e0e715d0..3604245f11 100644 --- a/_search-plugins/index.md +++ b/_search-plugins/index.md @@ -16,29 +16,31 @@ OpenSearch provides many features for customizing your search use cases and impr ## Search methods -OpenSearch supports the following search methods: +OpenSearch supports the following search methods. -- **Traditional lexical search** +### Traditional lexical search - - [Keyword (BM25) search]({{site.url}}{{site.baseurl}}/search-plugins/keyword-search/): Searches the document corpus for words that appear in the query. +OpenSearch supports [keyword (BM25) search]({{site.url}}{{site.baseurl}}/search-plugins/keyword-search/), which searches the document corpus for words that appear in the query. -- **Machine learning (ML)-powered search** +### ML-powered search - - **Vector search** +OpenSearch supports the following machine learning (ML)-powered search methods: - - [k-NN search]({{site.url}}{{site.baseurl}}/search-plugins/knn/): Searches for k-nearest neighbors to a search term across an index of vectors. +- **Vector search** - - **Neural search**: [Neural search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search/) facilitates generating vector embeddings at ingestion time and searching them at search time. Neural search lets you integrate ML models into your search and serves as a framework for implementing other search methods. The following search methods are built on top of neural search: + - [k-NN search]({{site.url}}{{site.baseurl}}/search-plugins/knn/): Searches for the k-nearest neighbors to a search term across an index of vectors. - - [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/): Considers the meaning of the words in the search context. Uses dense retrieval based on text embedding models to search text data. +- **Neural search**: [Neural search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search/) facilitates generating vector embeddings at ingestion time and searching them at search time. Neural search lets you integrate ML models into your search and serves as a framework for implementing other search methods. 
The following search methods are built on top of neural search: - - [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/): Uses multimodal embedding models to search text and image data. + - [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/): Considers the meaning of the words in the search context. Uses dense retrieval based on text embedding models to search text data. - - [Neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/): Uses sparse retrieval based on sparse embedding models to search text data. + - [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/): Uses multimodal embedding models to search text and image data. - - [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/): Combines traditional search and vector search to improve search relevance. + - [Neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/): Uses sparse retrieval based on sparse embedding models to search text data. - - [Conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/): Implements a retrieval-augmented generative search. + - [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/): Combines traditional search and vector search to improve search relevance. + + - [Conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/): Implements a retrieval-augmented generative search. ## Query languages From fd709a1301ac24d5dd45a704aa2cd798f39aea86 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Wed, 28 Aug 2024 23:01:41 +0100 Subject: [PATCH 012/111] Updating data sources docs # 7680 (#7800) * Updating data sources docs # 7680 Signed-off-by: AntonEliatra * Update connect-prometheus.md Signed-off-by: AntonEliatra * Update data-sources.md Signed-off-by: AntonEliatra * removing the screenshot from data sources Signed-off-by: AntonEliatra * Revise text for accuracy and readability Signed-off-by: Melissa Vagi * Copy edits to headings Signed-off-by: Melissa Vagi * Revise lines 39-42 Signed-off-by: Melissa Vagi * Update connect-prometheus.md Copy edits Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md revised text for conciseness and clarity Signed-off-by: Melissa Vagi * Update _dashboards/management/data-sources.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _dashboards/management/data-sources.md Signed-off-by: Melissa Vagi * Update data-sources.md address editorial feedback Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update data-sources.md add prometheus in next steps Signed-off-by: Melissa Vagi * Address editorial feedback Signed-off-by: Melissa Vagi * Update connect-prometheus.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md Signed-off-by: Melissa Vagi * Update data-sources.md 
Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/connect-prometheus.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md Signed-off-by: Melissa Vagi * Update _dashboards/management/data-sources.md Signed-off-by: Melissa Vagi * Update connect-prometheus.md address editorial feedback Signed-off-by: Melissa Vagi --------- Signed-off-by: AntonEliatra Signed-off-by: Melissa Vagi Co-authored-by: Heather Halter Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Melissa Vagi Co-authored-by: Nathan Bower --- _dashboards/management/connect-prometheus.md | 53 +++++++++++++++++ _dashboards/management/data-sources.md | 58 +++++-------------- images/dashboards/data-source-UI.png | Bin 152247 -> 0 bytes images/dashboards/delete-data-source.png | Bin 88006 -> 0 bytes 4 files changed, 67 insertions(+), 44 deletions(-) create mode 100644 _dashboards/management/connect-prometheus.md delete mode 100644 images/dashboards/data-source-UI.png delete mode 100644 images/dashboards/delete-data-source.png diff --git a/_dashboards/management/connect-prometheus.md b/_dashboards/management/connect-prometheus.md new file mode 100644 index 0000000000..7f5196e2fa --- /dev/null +++ b/_dashboards/management/connect-prometheus.md @@ -0,0 +1,53 @@ +--- +layout: default +title: Connecting Prometheus to OpenSearch +parent: Data sources +nav_order: 20 +--- + +# Connecting Prometheus to OpenSearch +Introduced 2.16 +{: .label .label-purple } + +This documentation covers the key steps to connect Prometheus to OpenSearch using the OpenSearch Dashboards interface, including setting up the data source connection, modifying the connection details, and creating an index pattern for the Prometheus data. + +## Prerequisites and permissions + +Before connecting a data source, ensure you have met the [Prerequisites]({{site.url}}{{site.baseurl}}/dashboards/management/data-sources/#prerequisites) and have the necessary [Permissions]({{site.url}}{{site.baseurl}}/dashboards/management/data-sources/#permissions). + +## Create a Prometheus data source connection + +A data source connection specifies the parameters needed to connect to a data source. These parameters form a connection string for the data source. Using OpenSearch Dashboards, you can add new **Prometheus** data source connections or manage existing ones. + +Follow these steps to connect your data source: + +1. From the OpenSearch Dashboards main menu, go to **Management** > **Data sources** > **New data source** > **Prometheus**. + +2. From the **Configure Prometheus data source** section: + + - Under **Data source details**, provide a title and optional description. + - Under **Prometheus data location**, enter the Prometheus URI. + - Under **Authentication details**, select the appropriate authentication method from the dropdown list and enter the required details: + - **Basic authentication**: Enter a username and password. + - **AWS Signature Version 4**: Specify the **Region**, select the OpenSearch service from the **Service Name** list (**Amazon OpenSearch Service** or **Amazon OpenSearch Serverless**), and enter the **Access Key** and **Secret Key**. + - Under **Query permissions**, choose the role needed to search and index data. 
If you select **Restricted**, an additional field will become available to configure the required role.
+
+3. Select **Review Configuration** > **Connect to Prometheus** to save your settings. The new connection will appear in the list of data sources.
+
+### Modify a data source connection
+
+To modify a data source connection, follow these steps:
+
+1. Select the desired connection from the list on the **Data sources** main page. This will open the **Connection Details** window.
+2. Within the **Connection Details** window, edit the **Title** and **Description** fields. Select the **Save changes** button to apply the changes.
+3. To update the **Authentication Method**, choose the method from the dropdown list and enter any necessary credentials. Select **Save changes** to apply the changes.
+   - To update the **Basic authentication** authentication method, select the **Update stored password** button. Within the pop-up window, enter the updated password, confirm it, and then select **Update stored password** to save the changes. To test the connection, select the **Test connection** button.
+   - To update the **AWS Signature Version 4** authentication method, select the **Update stored AWS credential** button. Within the pop-up window, enter the updated access and secret keys and select **Update stored AWS credential** to save the changes. To test the connection, select the **Test connection** button.
+
+### Delete a data source connection
+
+To delete a data source connection, select the {::nomarkdown}delete icon{:/} icon.
+
+## Creating an index pattern
+
+After creating a data source connection, the next step is to create an index pattern for that data source. For more information and a tutorial on index patterns, refer to [Index patterns]({{site.url}}{{site.baseurl}}/dashboards/management/index-patterns/).
diff --git a/_dashboards/management/data-sources.md b/_dashboards/management/data-sources.md
index fdd4edc150..62d3a5aab2 100644
--- a/_dashboards/management/data-sources.md
+++ b/_dashboards/management/data-sources.md
@@ -13,67 +13,37 @@ This documentation focuses on using the OpenSeach Dashboards interface to connec
 ## Prerequisites
 
-The first step in connecting your data sources to OpenSearch is to install OpenSearch and OpenSearch Dashboards on your system. You can follow the installation instructions in the [OpenSearch documentation]({{site.url}}{{site.baseurl}}/install-and-configure/index/) to install these tools.
+The first step in connecting your data sources to OpenSearch is to install OpenSearch and OpenSearch Dashboards on your system. Refer to the [installation instructions]({{site.url}}{{site.baseurl}}/install-and-configure/index/) for information.
 
 Once you have installed OpenSearch and OpenSearch Dashboards, you can use Dashboards to connect your data sources to OpenSearch and then use Dashboards to manage data sources, create index patterns based on those data sources, run queries against a specific data source, and combine visualizations in one dashboard.
 
 Configuration of the [YAML files]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/#configuration-file) and installation of the `dashboards-observability` and `opensearch-sql` plugins is necessary. For more information, see [OpenSearch plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/).
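As a minimal sketch of the kind of YAML configuration involved -- the exact settings depend on your deployment and on which connection types you use -- data source connections are typically enabled in `opensearch_dashboards.yml`:

```yaml
# Minimal sketch: enable data source connections in OpenSearch Dashboards.
# Adjust for your deployment; specific connection types may require additional settings.
data_source.enabled: true
```
{% include copy.html %}

Restart OpenSearch Dashboards after changing the configuration file.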
-## Permissions - -To work with data sources in OpenSearch Dashboards, make sure that the user has been assigned the correct cluster-level [data source permission]({{site.url}}{{site.baseurl}}/security/access-control/permissions#data-source-permissions). - - - -## Create a data source connection - -A data source connection specifies the parameters needed to connect to a data source. These parameters form a connection string for the data source. Using Dashboards, you can add new data source connections or manage existing ones. - -The following steps guide you through the basics of creating a data source connection: - -1. From the OpenSearch Dashboards main menu, select **Management** > **Data sources** > **Create data source connection**. The UI is shown in the following image. +To securely store and encrypt data source connections in OpenSearch, you must add the following configuration to the `opensearch.yml` file on all the nodes: - Connecting a data source UI +`plugins.query.datasources.encryption.masterkey: "YOUR_GENERATED_MASTER_KEY_HERE"` -2. Create the data source connection by entering the appropriate information into the **Connection Details** and **Authentication Method** fields. - - - Under **Connection Details**, enter a title and endpoint URL. For this tutorial, use the URL `http://localhost:5601/app/management/opensearch-dashboards/dataSources`. Entering a description is optional. +The key must be 16, 24, or 32 characters. You can use the following command to generate a 24-character key: - - Under **Authentication Method**, select an authentication method from the dropdown list. Once an authentication method is selected, the applicable fields for that method appear. You can then enter the required details. The authentication method options are: - - **No authentication**: No authentication is used to connect to the data source. - - **Username & Password**: A basic username and password are used to connect to the data source. - - **AWS SigV4**: An AWS Signature Version 4 authenticating request is used to connect to the data source. AWS Signature Version 4 requires an access key and a secret key. - - For AWS Signature Version 4 authentication, first specify the **Region**. Next, select the OpenSearch service in the **Service Name** list. The options are **Amazon OpenSearch Service** and **Amazon OpenSearch Serverless**. Lastly, enter the **Access Key** and **Secret Key** for authorization. +`openssl rand -hex 12` - After you have populated the required fields, the **Test connection** and **Create data source** buttons become active. You can select **Test connection** to confirm that the connection is valid. +Generating 12 bytes results in a hexadecimal string that is 12 * 2 = 24 characters. +{: .note} -3. Select **Create data source** to save your settings. The connection is created. The active window returns to the **Data sources** main page, and the new connection appears in the list of data sources. - -4. To delete a data source connection, select the checkbox to the left of the data source **Title** and then select the **Delete 1 connection** button. Selecting multiple checkboxes for multiple connections is supported. An example UI is shown in the following image. - - Deleting a data source UI - -### Modify a data source connection - -To make changes to a data source connection, select a connection in the list on the **Data sources** main page. The **Connection Details** window opens. 
- -To make changes to **Connection Details**, edit one or both of the **Title** and **Description** fields and select **Save changes** in the lower-right corner of the screen. You can also cancel changes here. To change the **Authentication Method**, choose a different authentication method, enter your credentials (if applicable), and then select **Save changes** in the lower-right corner of the screen. The changes are saved. - -When **Username & Password** is the selected authentication method, you can update the password by choosing **Update stored password** next to the **Password** field. In the pop-up window, enter a new password in the first field and then enter it again in the second field to confirm. Select **Update stored password** in the pop-up window. The new password is saved. Select **Test connection** to confirm that the connection is valid. +## Permissions -When **AWS SigV4** is the selected authentication method, you can update the credentials by selecting **Update stored AWS credential**. In the pop-up window, enter a new access key in the first field and a new secret key in the second field. Select **Update stored AWS credential** in the pop-up window. The new credentials are saved. Select **Test connection** in the upper-right corner of the screen to confirm that the connection is valid. +To work with data sources in OpenSearch Dashboards, you must be assigned the correct cluster-level [data source permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions#data-source-permissions). -To delete the data source connection, select the delete icon ({::nomarkdown}delete icon{:/}). +## Types of data streams -## Create an index pattern +To configure data sources through OpenSearch Dashboards, go to **Management** > **Dashboards Management** > **Data sources**. This flow can be used for OpenSearch data stream connections. See [Configuring and using multiple data sources]({{site.url}}{{site.baseurl}}/dashboards/management/multi-data-sources/). -Once you've created a data source connection, you can create an index pattern for the data source. An _index pattern_ is a template that OpenSearch uses to create indexes for data from the data source. See [Index patterns]({{site.url}}{{site.baseurl}}/dashboards/management/index-patterns/) for more information and a tutorial. +Alternatively, if you are running OpenSearch Dashboards 2.16 or later, go to **Management** > **Data sources**. This flow can be used to connect Amazon Simple Storage Service (Amazon S3) and Prometheus. See [Connecting Amazon S3 to OpenSearch]({{site.url}}{{site.baseurl}}/dashboards/management/S3-data-source/) and [Connecting Prometheus to OpenSearch]({{site.url}}{{site.baseurl}}/dashboards/management/connect-prometheus/) for more information. ## Next steps - Learn about [managing index patterns]({{site.url}}{{site.baseurl}}/dashboards/management/index-patterns/) through OpenSearch Dashboards. - Learn about [indexing data using Index Management]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/index/) through OpenSearch Dashboards. - Learn about how to connect [multiple data sources]({{site.url}}{{site.baseurl}}/dashboards/management/multi-data-sources/). -- Learn about how to [connect OpenSearch and Amazon S3 through OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/dashboards/management/S3-data-source/). 
-- Learn about the [Integrations]({{site.url}}{{site.baseurl}}/integrations/index/) tool, which gives you the flexibility to use various data ingestion methods and connect data from the Dashboards UI.
-
+- Learn about how to connect [OpenSearch and Amazon S3]({{site.url}}{{site.baseurl}}/dashboards/management/S3-data-source/) and [OpenSearch and Prometheus]({{site.url}}{{site.baseurl}}/dashboards/management/connect-prometheus/) using the OpenSearch Dashboards interface.
+- Learn about the [Integrations]({{site.url}}{{site.baseurl}}/integrations/index/) plugin, which gives you the flexibility to use various data ingestion methods and connect data to OpenSearch Dashboards.
diff --git a/images/dashboards/data-source-UI.png b/images/dashboards/data-source-UI.png
deleted file mode 100644
index bc07237847635ad2692cd948fbc0b41ced2848e5..0000000000000000000000000000000000000000
GIT binary patch
zcehD#^+i`xyV*FXgO42XHo-Miwcd(v*529AG;gbhr??aXhW;)P4mCv4Tch(wuDb4( zrgYnziC)Wjo)y0s|MhL;qd;QB35q_UY2^WBX@wlS{WXQLJWJWCWHRzbq$19T9x2o8 z-Vg3m8aMg3JmVj$yo5Un=T`wFsK|d}1%3xs|DPmgMOWzNaFJ1dBjrE*h7tc#4p8@b z{CHqrvJZo;IF5E|F-l0JMF--1f|dIC4zopH_|$u_h%{S&Y01 z(*0NtkwQ?F+@4hh3oX8i^uAo2?WG+b5!Rs=L3R+qwVueQyE-BM!5ho#;8Or<$}QHa zU_D-<)Rp>EJvu5Cp~H}ZirJF?alWFb)Sp?kUnFr^MA0fnwXbv^LYr7LOZA9CU-ZS~ zeJ0^F-a08z&5QqjLL}Sb<83%qF2A)U6}@rt1{<-&7kw8nZLv86w#8N$>nb_O=eT+J$x8z3aMhym7Br3tnm# z8Y@tR6lfQHI(EB~plnX%J1HAz9C4x!B0huw#r#pB7Ul{F3ZGHFb$;Xl#_ie8RQMPx z*!jOM_tyrD8$*J5B}dc`C7My0dbZXsW%(`AzeCb2EWb?gw^~^PE%)j156^4{_UG%w zo8V@$!~;;F23qI%;&^nq{GjwGYy`XL^G4?>Uvd(qSbOg%pw=I;a2F6PZ?>r(qP+Z{ zcBf1xfJuYJ^sZW4MQbc~;TU*t-uPU_ebP3#so{6p7MWJOd~r%Pibv54MWKZDFY-b1i0tlHnC5CPiSkJu;H73*|iM6ydz@8@G4 zzx@{Ajp=eYI$wYFfuOrT;@Nb`TIif~B1h&EMkOf(;UA;~jG_{MD4p~o4zw;jF=+pS zGVm!8WH^nXjIubHH=fbxGKg4@*KW=w8Ub+!cG91MS{djUo3@edTW!yXKu2pm?#+#g zJs$`drr zQKnol_Q>`{ewmqfA1*Gp7tu2OiAGp{bH}D?v)2{;B|YrRtSq0?jt}N;Q?k?D9)&@- zjOE&JHBeC3*@1+3<|1!JD2mI1y=8iLBB{+l+pX>YH3LH`IaS*Nif>WKW0S7Gua4 z8@4=CxUJ~-=iFn#N2}V?ZCAk#6E&YIRGU2Iw_zX#9@10|$XDYPvzZ#3>GepFPqHKA z`Wx**C;UT;zGY1siccG%QcD!{FZSIIp+edXZZ+7Xw*>(|)V?|5S-}-_r2!&-wDR!_ z%wysoRQJ^z()LSoBcAuKc{!bZ79UCX&as_6X7jv~p&?maCHKMJN54!!KL$ zKGhW$i-^7Id$$II4%sF3tzQ$@?gITj6x8J<{pv|-uochwW((7ahHD>+fJSV}a z_sfXzG)jgy_7@4n}Ijaf=Sa>hepYgx(zu`rB;e?P{<7M=DZ(Bj624u z7L7=*DUt3#%y&tqEw%<~1npq!SK4Eg4v~HtIHUd_oj0^pyGaC%n&I~8KK{^bBbz`p zIh1x)^Q(_FV|niGLVe~VA@F)@|7zI;dz_erVGoWESnn7@JPU8n9=aBfV>D>+Hc0&{wKW6CkiV>_cI-+f*?7sfsAw%!#po5t@NcPvwEd1KE~N6^iGge=Wj zy93E|DXGebUnRpdUtv7?hBNnwYM2DIwwVH_9C|^&xS*FE5gNTySI{B^iUY!B92S+b`CAy8RGr_P$5LE&~o2|6DTDMFsj(5}CMtAuMLW?HCI z5=``JzSf4VW$pma>~_}OFql)n*zz4)qoH1{^<=mD_^FoP`v%XUHy$_p^v4bO@oA53 zj90v|175`ndOS+vT!XO9@o#qB(0#wY@kXrR$CbuUlPxz#?v7lG12;~PE^ID?f6@h9 zAkt+0k`c9z{&0P8=A_apz3_a?rC0f2I=lp71RJOT7=~nGF*qab@8fI+U@-9g8g*xH zIg#&^C|svMIq>IfGgJx%dVhTS<0GQ@+IP0gYdZL?LZwNEr$FWV$85y}g}d7m#Vgw_ z06aF)`KUeUQJ;+~DCRk^iI(C1PB&K}!(1q8CzV(P+#korz*DFh6Am<0yHYHa&O9`| zNEhO_LGXPOxD{E)DW3d}2(rF-mDB(%rc-Olq*`PjaIuHXhLzO&`~CfAzLoF_rSIx{nt7MxXyV zt#q^mg}ddFn9!()mTIXf#KLR$<8aUt zsMJdmuEs^K(n@d!?)TbDjH>S)P7{&tmPlac>YCN6{{)@Zu*P&uo=q}g?T_b7QLg$-8z>|G%-|zD_1e9M3;8} zQL&;^gnAuM=?-Dq^c0{k*&9qzAxgb&>1oNBmu>vYyn*yS6? 
zzAAXPjQ1ci`|<9y^lGPcwp^0aSh=zZn~ErFqJTri_55v|Qq#B`WYFzAHRxK%kKa*@e0CKnF|KHO{Uug3pu;v$LGv7`WfQpt$UM z6iz2O`OO;Im~*}r5XzzDq1&{wK4fvwXM6O0M)qAWNoaG2#gA0Jr2%O=EisbPOyRGx zje05;A#c0&=mNNH-{~|*&`v?anZmQTLoR+TPesP{L}Hk9ek{!1H!B~@N9~&DnN;ZLsLm!;pEgA$*Tl%)5zMcl6`iQRfnKqEU0xKa1 z4hOo&x0eKjS=o)Qwr)Pq8rgam*ZYb5z8b}2xs0s zJ8FrVquJb><>G%n$$s`Dn59VXdm-g>0Cv{fD<`V^hfeA^Rkx_#@6(Nmf~x9Glrcbb{?gl~AuAiKHcWD0 zo&ubv#zwtnOlZGYzfKz>APTm;H4gx2yYDUJ-Pd+3genx(#DHgd>Xa!J7`IqlA-bhdT|BcWN>;H4sf_$ z&J$-8Eu#g%R`<^QYjSfH?}yd$>JnlT(Q!$`aV>1dvXgMW2cLF?;1-~#yuK?r9J#|h z)DjbDf(wd*I*3=+^gfJU4u@u@x zluoFxSxxcRkc$lI>h3=qHM@xvktuq4C^dbsNH!%_QD+6q3#i`m7wHz+yNKepgUb1+ zyel#~Y#x6TqWzu$eaHXtwdofq5l}-ejQz50=He@^YjVRqtOF@iorn#@Di5_TEDMl+ zcKiM=U(ujw=7Bo#Yx&lqT(ez`YvHndOorI|FS4JdIK8`FyoG4EBT_g(6c>Aox}N6; zbO3Hcn7>#9ELli^F))z#!kXvJz9>hV#NwcKRkmuvu(ki{5kU`=niphY+@cUcB`*%JE~V*W z4EOC&XiFc|9@?3+g+gTyTLu=>dn59zx-85WMo#yWn*Gdv`5(`Ot0P;KX7^uCq5TODhhxjdcqp zk5M_n>(lpKL!%)3DFlxmSYN1;<1agkCTTvD%&rANe+VSe{8@H))T8xovbj8$PU^GL zL+r~5<~y;3Pl)q2eQnuzXYEa8qP2#ZF`AC;9b9q1yg=EX>FaaL$@qj}hdJ|g^XPvU zQxSC=%1Q!Q3e5i*8J}nD>CU)r$oFg8UPBw@E=0_({T!;2S!M`8@jTrr5O>+1`7XZG za#Zi*-`}6_#v!d%?_!lUA?6WlyHG6+PlU_XpKWFC2G6J#nag`OFTKCB#@{IQVa8jj zxjud5Iri>2mg$;F9@12L-r%y&-0=oGfq+>Ly_Cyyk0gyP#VINTjJjJCn9)C3SXgel z=BmXNcyjHc^aD4FINcYt=?AE?Ix&Z@J;Q{kz2bhpMvkYjmw5chUtcp!f|{(%cO>;_ z8{~@CcJF;XOX>G5pxh>Loimss=djvaa|yll29$R(#6$>X>t(^b2<~HFU#zz)f|3jP z$U!;{l~sOoKBAoj9GquIO@OFibP}jEI75I zJGt?lRl9OWN&D3YC6x#0=k7@(>7k!>>GjCQ1e%w+e6Bz2`CjZ}pSkbjB}{u3X=4r0 zY!VZeVo|Mnao$`8{8%5F6U)e#JT$u0#a+2=thf*mBBJiehxxt%pX9P0OCZqKG+*!P z%r@g;O@K;uG8@gVh3b+((XLWu?ht&PRO2M6L;`|w`+qMO&wvv^D~#vX%h;xj{L3-s zYwf~pp^eIikZW9!1q>t5=y6)NJk%+L%wHpIP92R3S7`QIg&286Zw#HyOGRyuPk$=! zQ=|RTDkG>;$8QBo0U6PO7eN&Wjm`EVW(<82ZQVS|)roF9(iBGlLa2OJpS7yrQCUc_ z$q|Yp-h$y74M^wsMn7edB2I)&@YY+~hfPgdY!TaGPFZJrZ zf2e0T6TTkIY*)hZ`mU);D3L`mRmK~FqZ_JZzDSzd%Gn;+I+i0_R0sG3`}zcBx^dQ7 z4I_g0tXa74{B>Ph$fPedu;+c5xyA8!o$7^W0T`zARd~Z3+yM+? 
zrFtGNFN;|o;Eiim)p1c@p9O(t)%m?w&p4K7z9`4e8<;#FcR+bdChKd z@L!A9OBC~CiX~S(tFLc;_iL|%A3xLGd=qg!Hiv7s8Mzd>%bFP!^-dEvJ+NDcqdb-= zfPBy~VOpPmr*-^i`C>x=IV0lFbR4|3;WWy~w81vqcQYdbhX{H{fg!WSd^q*M>-xj< zn}CF}$C5KN<^E=UPfyFkcwG%w>_`MZmiq*1J9MN}(oM7Zv%F~c%MwBZou_C72Y`<*t<7(Z{5=sY0f<<{{UCgJS3dEZ^<<$g z?sD-GEMR)N+#+1g+k$+o@?N97ZgLvlGWkHU`-Qp7lgiskkvkHW+z5o8U&wqfRSma* zF9h|}<#Axl?ZwhKKbG*Ui+h$&Wa9ddJ>d_TKe^Mu*B;GRTaEX= zh2J2wHu1la1&c;%R$aD>W;CJ($_BEfBni5!^Z(LxkuKFumq89aOmM|xnN-aaY;+TM zS8l$KvBc#dQt)Ul3wyY?UHdd;qz6aG`i0v~mV2Tb9?^hGWg};uYX(AMM{}iKwV(VheE3_1*6l*nd+w)XA*`gdlcMWB%gA3YkiiHBS?L zc;2&x@^@d)S=+`nV(5KMJidHmeh_6U+CrJRJdX%sjjBKeYSKRkQ*78+4+~Qps`dDs zcSLqLmC^k2!4<1IE^Y57pH2$Zc7gJ|RWP^BS6P(cmgOnH3v1wJ8T|XnCMJk2xrMDn zr#5yhN9Lx;`d@wY%YLmxDZJG%e4vjP?sw|*SPx5N>dsGTnUL({DMOS&t>DdcT&l*_ z0c0ClwK5HHh;U-#-NGEhGy*kwQ@-Ho-JOV;m{-zUnPY<&SF=@OE+MwOQ*t%G-|TExIBVEWoX4knq>tBZzIX0FxPrQ1@PLd*NSx> zjA#m+&np*Ubeo!T+>r2qjGJo*IQr5A$`lWkxw_>&6sP4ggr(j~lAjuA4eBuzo~EU> zeSIsIZg9?BxMEL_+G^Ygz=BT)?+Ku7aL?z2!~OyCL>9C zHQ0z+IvHw|oqJ1fmVj;9xS-@D)yLne-Yx4iAa_c^`KOw%~7V1dIilV4gfH1P&Q3 zBgV{maT3d4nfVynyl3=se+9V4f@jqp$W(a{GBoz3?OFU5%3(UqTb#Z@QdKl^ys~B; z1-HX%zSt=bVY>|0?xly&+Qs3yKq6OnI!Gd`pv0;JB)U;TjVY`Shz zi=JY;Cy6mQ#lWlKrtDM$P)B7Rj5kyDL2Li%99V5`HZ4cYj91BN>l6d6xxdn`QjkeMPyikeV3M_-IU zU9n!RUinM8g;($B>iC0SDSg*bC!24o|0w%-SA?>6*Q}L+V&G%WfJ^=-_B8&1`{v{4 zRn4kB7c~ik2zDLOn8E5V4Y=LfoBG;N>fSRVa7L6h*zR$n-k{hk3vAABcch)z9=_j) zR=u^J%Fhgp=vMjWXyiAo5T?O~hBdc_&bA|V<9LtIkjM9_=7#cKJR*kp&Jv4@B*-zZ z>#p%GO;F8pC@WV>;pxNQZET4Do`plfCo$qHww@ig`5POU@IxcF0N~i!HrG6KNC(esJG0G3k?+YjOB=)FVf>HrENsb>s+ZXjit`##k+y2 zlH;t83GPv1rOD~Z};)jK!BlR(?>#POJoK_d9YrdZ{qw%ruYu#xS2|xo1Jv=0fPb5i@DiL zP(PCQ;-gJ+9)Lm^gY>wylZ00h;`y{Vt=@$w6u1nkR|9CShvZLWu0hg29{-LB-e+J8 zrkymY)F*341EGLg0${s z`Ev6Jhp>v2%JkYxXx9ahJXtDjZq#H_mCs>V$O@#z2tjZ*@mrNHebl5J%i<2nPY(ne zCd65XQh12K7%qoUS*(csYVc74q8y#0(x5>I=f&yd$6ze*7vk{xUann+Vb;E?zdBIU zIAlY7{zC8~wXUn!i^Mk=Q!s?uIDE~xd*E5SEJB33@$p&@*0uM?{T_nc7;=p&v(J? z*h`~(AAi`Bt}wzFM7w~Ni17X_y$_c(tUm*ov0Tu)^oeXdk3libcqI3NYNGWS1A;A{ z+x?kueR#I4L}tKPmZ5C&hGmqltT{pC&g)j3wi+V20fWBVYf$_6Q3y9nOu-TixF-82%FP-8q0TPKHnSf6<{TUV!_VH(Iw$r$lXU@_= zXr5whDv_?{u)yQ9-Aoa(!@|dixI_b2DSI-HGL+lvYiW_Kk9-vt5P`l)qrqEGzvX^_ zKbF53qCpC-0UuvJ>wf02qon=~6O7?~^xd6G8}2q8~`CUH3KkH0G}n zj7^%YlB2m1LNT?|BEHNSMJM|*1Bi7Kf4!h?1uWH!+hmVAbN7=0A7qsUBbQ-rPUS&=7WBTG^-Y4JY({ak&=RLk^?+ zm=L=?{`wB#(@j~DVY+$y8Iu~+?tfZ-DIXDlS8W<+v|rQCR5*vc9qmfL55{MIX$LKC z(t)jKh_IQ5C+f(}ezS^^9qsZdL3{y6odhwYzQ{Y28KDN3AdW$Af86N%tE)N0P6Ap4 z=bu3zK=h#YM~c8DM!QIHB7e!xmN5YwD;3Miqje7V433gLvbXuZ}IKQOrH$j3X5ZonLa7%Cx4nc!E1P|^s65K672(H21-6goY z6I{CS#v0C@`OnOmdColVyr0jPtY)nw+}*u*?b=n>^(!hff4ovkgLM#W>;cA9Ov*q${#wNJfGyKch5`#15VpLs&3 zDz1Kpve_4|zh5P85<3UTeLpiJ>*t!;$dCt1guYHesi<5SNuDkR3IY>sBUd;KIJF7{ z2;QZ7*!nFlM`uK&i-Pi8c0iNM05t<0FkGWSO!dACdtQbnuH~8=SPKjAKkm9f6($hq zml7xgj$+k;#X0pc7;68yGMO`9dsG^~HMT*p%$yaX)sa9|pwEhen*m6we6$iv(rfm{ z4toH1(^C8@*nIpk=iE{D@Vb?i4oFe7%m)Asz4eBia{4^ja-v3xJaZVNy0K8TzI~X; z$j3Ohi88#WBz~37ed3$q@vtboNy-=;z@3RiTTyWGojLXDV4>=_bWd=UAduA0@2*~d zcU$d`IwBoA`$w+**MiG|_p2P85J%!@mHPX!;x2}ukSRptwChz+@uDJr(Fsp>srI0? 
zolgK>y-c6=$Y+7Xc$D3ap)39u5(ao)3lODIXBGv`_ZVw3zIgHcSL&b*(|HD~@AHPF z>X4?c?ql$m?i=>GB8BvpVi!#noq@+CLZL5(xLdV=fQIi_)xw49%ko}kjD70pdh2m^ z9YfoB@$gX!;tY0Cy@!cjvvd57ktXI^YI7VNz@t&0gGci3-$;31Qd)2vfy+pFZ7-p2 zhR<){!@DG#&hJ)VXXmPr$@kNZU5Nc!fN;%s+T07@?DbR=m|m4mOGeQ1^7;^Fgsa4l z++}uGUWr2A2d!G=yx&urWoR7GY~)aS4p283Lf3_8$7b)cj(qU zat2I}B@^S>zOK|7)<*TVuyQrqsa}UUq5rXin=wPLyN7=%=YBrEIdfF*Ybf};3c7Rp z4cl=9rK3z)8)A?4D?k8_i=E%6ntHx7%qsW2{5ybrI7F-(PAu7=Z&`NI~nDx)qR3uQd{rGD7$9PJuGQ z=YB6DdV2sc%#7HH9Q7ZO*TS@~1;03>9XeW6g;tx2d0#JVtB>)D$QC@515qzqqsS05 zy-l5mbZr(+&;l3YA%!%a?09TCnKy{2jToeX?=`+N8&G{*_gq$cPc}{K&G|8ib(I-V zQ;nMlr2*=uABpCVtpmn`1Mp-q!jZU&HWI$_nP+V@jj(%&WisDR)r1jCr4LNIhVf*n z&HRKPvOSV4*YR{3*wOK2ew@G0og-_FMo!}ZT2gSgu5R>H|7DRf2o1^OoA-?oE8~5q zVugTqkH#rai zt~cM(E;XKROiG`H0Xt+ZD}ZjM*--S=CL2Dp{-(pR*o}UrVe%uBBn)WTeFwCv=esH+ z{|-}A4i~A#*WVq>q^oI9H~n}lUbMC|`q!}~z==S?)_Se|vTkgFaMjE`Qa1Ax#~p)cLE|> z(t&mAyPCe~kxbt+;S(cHs~LBX$6Nf%BA`XBMLC?pZC6Csz)o+Aevhnn(SxD0y^|z& z@VRz59(Jvx6v~QB?Jaz4XNs6qrsIR9@@ofIXD{Pnqqo+Ry+PP+16DweHNW zW+QP1uXLSj`zbXW@9@Ed>SM@{dwP(zj0U39>l$8*<1O^R6#iwEdc*Dhi;dZ^8H>jS`9T`L9(y}fv@Y+nZ)$h^D0?ir zJgmq+|3`6hUz$QPMZQU3++AFXq1F6v(bvyB^J~h)!1g8;jRp0`MD?+Zhm+(+I(d!{ z_FbO1W5F_}?#?{Kc(QiO^@$JFb`?EEV1_*9T*;5l+M8mc*we2FbxA%IUltpR&HIqf z_>wA#+XHI{A-(8}pUrfTFPM{WWBN`p3!uf_H|%t-Pz?iw)iYwvU*F z$Ju4EDAtoR6&k}sFo zxN}X-D&u7O{D#Nf*^>E?^E=|GQtK>V@Ono<5GJ))T8UmuIYBm@W%EhbQ4h6BH`)?v zz}|;(GIHd8)(XsgiFZ%%i^eL|o+n&Q@GY3mg-|jS} zv*g#J5Fw8yu$NYZ4WnGMqX^(#MfZTy@Z9x+))+&1PnSH9C^UGN%z^Br`zDa!^Au&{ zJ1b4a_D9cBirV?fT&cXXRLXg?#VV&vJc)URn6LVREqn$ny$(YOMKkciP6#?C`&6wR z6ib_eW=mf$PH_Nnom^MWQGc_`5se*}t<|e5Jm@nP6s%qiFKnI}zrM*54Uk0D(|C4d z0eq-1l~k0|sWi}$&LCsN{JC*!*zi&o=Wwzuq9;K9@t7Z3nIPZ~Wvy6^VTtv`)pzOA z?#lf6S_tkCzvtI3s+XBHY_-JM-C{L>y8dm~CQ`c1eypgvyY!ypSk{H6_1V(BzQjyrg{{swXRc|I+4a7U91(~hRL6%9nHb=Lnr0wGB>*sTBTnOeetn~%`@2QozBHWK>wktO-S2K^x{$Y$Qi*$9d z(I`5lqben4iI_QPV+QRcj&Fz$GSIPOT@v?3$Aux4w94#FQFNE=akfieQBG}^>!o)z zhJnuQA#kfsFdW5|pX#Qd?uvj@lrhO6-J!r#E7MlaSAa=As_G-M$-iph5U5-2@HH5o zMu?yKanpwfjS15NLNMRAs-h4$I z1oGP?DkRRHmv7l9Q~v4bajSPlxpXl0Flr5mU|lT8r@7d;DQhSy1-wp`VwmL-3DJEBg;?K&N;`lQmP+y=gR#$1+&)SP;Umd~XPVRCzqu=auf@7R^ z-vXInF*v@f%W^mZ*YqVd$Kr2&uac7$aW?y98apDs~I`s$}w+UqiByfiE8#O&% z+!W;fwt#qE+hnDwypnAFC0=L+J$uwYAZXVBQ11yRKzkh<{DYhXjd=U1!HiaM0C3D;D&Bg- z1L#s!gT`5y-?|rL$8RN;e8LVC@J~Xg^@}0pXNbUOuB1H?NOW^~mBHk3zMH6HH3jk4 z#@O3S7cRQ*IASPK=(*ztPO*5V+Djv8F#ySU`EMRLc~_X>ATk zC4>ci_`HvwmwQt4YtYE5#?$)|d=m%3QMfjz>?WAc*!}osz#2$n+Db z7H&LnYp@(ohTZ$dwH*q2ULR}OrEap5-$B&sGSC}8PTr{(#g;KafUVFR3Z%8gDQ*8kz*{{7QZ{0>-_=!R1@{;g;E&*%G>7opvM zoL&F_`%p3Whj+FT=_$_i$EE)s9w5al!rsb&eX7mm=~qu`L)nGVqRfNqOBCsE zN@Rkb)JYu~*xwm6ga56g{~y1VjK8ry3H+YFVK>oy8TFb|1Wtu!x`OAnW-vG0ZRc=i z=X|5XTAj?m4Q@JrLW0boZ=UGKzjvel^Z)zG&pwC<=3b6FBaRyNUID(b@UkVVg!wd zpV8RHrD6RLN<_$$&*=#jkuHyLFFl0zKNt}3jmfqGexN72i z1Y6+ckUb_zrlxS=2uo&b7ie|-n0_y1YXg%I4-&fgTvaVpu#_wXry^k6v)bc3m<}5t z4M#MSrm1o={jRCSAswmd=Li{}c7*PuG`T&HGm?wv1lGyw3~Dg$|ZCAXDL~hM{8O zm5=Hd1ieJYd`=(Fw;gc$9Apm`ZgSg-q_(_Bp|Jq;rY?AjZN}!K6dL(VoP(70RR*4- zSaZyFKbM!ky6PIVYGp>GbH-Dbs?H6Cr1N;x9L|?9-*3;=!|q5B2&aXpdqaIis5RZY z>Na5TEl=Yfa_;Q6@R@SC2#N0g)`TGUtLC*nBEGXB+;@YX#5Z16GKKmY1^Uz7;U!!z za7K$X_h!&3th!p;%+r&f{%Z&gOciMqi(0IB6@%G=n1_m`41RJN%P4K*JVrq&PN*br zUiEc`W(d@lk;1cLv{1Hc9ph&<&q!km@tC?f%2 zp*m$9JVs(;kkdkZcP$16zSQJSus}1&0Q8bKf+aty0ZKBPgVt(vY%@dR4}p)8S!r~f z;1keabNXTXHRxZzp8xn-7Yn&zAfo8Xr!pysH+UZH_^|_j`#RcSr~O>J33Q{#>?)%B z)f|b*Y$}j1Z9Y*(j=CN--4tWp#qyVP`oM#m$*3a3XDybdtKZK(+;3CPIl>ramgn&U zxdRZ*A+aMe?Z zcnNT3dzM&2SpvaxcAB&0_m^#*(=VWx`~A!n;Kky||06l?ZzsSe?FRRz1md!zm3%BW z3YMMi0<#}$o-*Y?xGYoMOX*Dl6(+N92v*+lj0!s+$b@)&wPx$&LSX)TQX0CQkP 
zlEi`K_O#!|FV?6xDe#Kn7p!A^aP%X{U%OuhRE-X z7oWzjBvPp-QyI`TDvefgYLC`33wG|hKgueHS)y?Mf>R_lIc2KS`Pw`mqG7ub@3q$9 zbczweVG*+mCVLw2DTGpz5S}5UoHB4^NL-JwWB1L6WUq2|H~hQ6@E_;#zb?TCsZKlV z*50wW$!s-@F+}Fmj4Ue@q{*GJ$KQ_D7eXPeYb3q$DZfOWfpyhX)JYiBHDcYa?exwGR9RqiW}ZmU=m6rIj|dL z&4g#}$85ZSgF+v=^Ozj*Y8i>4l$fa&G?#qym(Aq=;LiWLjvToB8`1xRZk@hd8eGgg znO~@(SV-Ss{H)b+*~xv~K2|JWaxbG!r)-03ngHA79V0%=(X{pgdj^XY`z>R9(mvt= zWg5kUhQ*6Bgrf{ygK7!m(nb0(BK2<~xW(vrjJc@n*ZXlR!Y=xedora=2NIt`-)Vk1 zMk4NZ6KwTSt-vsus+7=!KHh2ARgHyLxIP_AguHRMFi&dfc;V648(P*(FNyBqyqAHe z841PYF-UKIaNtcuf*LW+fVc>o{_Vc_FS*P=FOjEcer#y0$mT&oC+Q#b1e>2k%9$4o z--9Q6La?aJz6wRBQ^kuUgvN8Hx~#QdY@ENp&!#Rl5CxM8(!0%izRfwTl1E`XW5kR5 zF~kT!-U7`upQvSqwfUA$^g?zh%i-hRERe0W=R)mCsaiXKQ=jR~CM-B;7@H0nRu8oa zD;#(a-&1R%hijB)^^PUWP%s1yCj7_?%0w@6b|?*t&y)D*0i4b4k8K^}Lv@h3UxK*) zBgm}JEx#Szi?vXy^o`x6t!Q4vuPnOBx3=xE)g}I4MZTI-|0UfpmeWpm%;TT_0?494K2tMt^^r z2c608V4u9tQ5axM<`&Oi=EpdArwgAWd5mu`G2qwrCY1gZIQ<+=t`v8bz>ltC6tsjj z;`lTs7v#2!9WWaGZ8=npF_JH9Rmz z*nGUZYklDI4REzHyIx=N6lV+{sec=Uwzal&sI73+=-s!AHFEqY)g*RrbFF<+jc&P! zsLte@*^;9d25UD4xf3QXbGB?~uPphD;PwCXfx)9;Q?N+Ja zIQ7S77ffOXfp*U}fL;*I z>n#L+K~Q>02aT&B}tye?DOD}>T| zsX|+%l>H4bNse=vkI82md9dA~AUF!%q)9%xgq`pzonb9kPz?V?4>=WiEe({gS|1$E zJRZX6KF^PxIG!xkAss%rb9B9S7hA<=)@jSezTI86D|APBMQ1Xg0>|sP{Vgm$e@OUg zCGN%5(L+H-Ks0HKEa2MTi5ORuOx@bi-+nD;*LJUc3^bVKj<`;?@8y#6B%@@Zbxheq zt)GiH0Qa40x~{xNtH8r)Pqoo+bzqD^n9JS4)XyY8_j~#_G?&D^ zX2GgREBiTJE>j48yQ0H(I8O%qbJqj!+~a*4fXv{gSFKM29!h$5G~K%+qUQmgn$6{q z$>X|dFHZ_O=6E=R|2)6jf*Mm0!E2WDhRx^UY`~?z?r0`{GIuQ^LySj$RwJ2|wr>N2 z!cQt8sP{02`xjsu`zh(N7mmGhBu({ksrFnYKWWE$Hcqxlj-q+p@+#3ZTDkcTF`?dW znW5pa@h48_(+V@fJebjMza0sDG|2m_)eZ8fQm6egjF`Jzj>$`vNvl;FkTLC43v+#G z)y~RleOe>%c;n-hv?XT5Zq!ZSqK~F`?Gyk!BNBTZ>0rsVAy(GbPFHKW0OMaB3Qh5i zHt2vfU~1Kqjl@7{jQ0mzn0n1<@|j4NkQ1z3ADiv5bSag5c^cN%(4r~xnQ6uojGC{asKsDJjb%9 zn<#eCkvo7}ADzzss7@L-yyN@SLJvr9V<&S5EuOVe7=J&`y}}hjkkF`x7^&pTmm6FzPaGt~jT?_USl78YHx@z(5As>*EUP zm<;a(#6kOYl>XRU0FyBz3&?YBKHNc%Xj*2tUnj7R3&d-$k+RDILK%ikuWFj$gztT! 
z^nd^A2~`)WNU~{8?JeR>to|7SAF-rkDg0{H!|{=YD1bWq+l#%?Nm=~kfb*XpQ+HL$ zzl6Q@zBO0&Ni!IXQ)OEk)nl`h@A40-yNbMLhuUZgciz!{OsNHjl%CLS2^*D3)xkB8 z(8wlKuQ-Y0uEBVA-_h&l6+TK|I@^Lby={KOmua4TJKv61yhAl3?b&znN_&x7k!bjp zQLDzRq1;AufwV)V>pc>bS_XWV;}%0er$7SLBWnm~dto`9k1aoUa+JYA81NpfrdzU) zx2;xauPk~iAagh~E<2gihp-2c^KTo7;{wa*Rup8&f*BdhBKri)A@VS z*sVQy=6(5ewgFcJaFl%X%;WJ6nFP$=X14GYnXmj#v9WJXYE4TZR1avJ7ff%V@2J@6 zaGbZ=@|m;w10Q<0MMLl+Yag4Ot}<-$kewcEH(aY5j2AwkW*nW-Al+eRr-OLeS-LNi zfAKayd~+dmvls1a+bnjga8Z!{`{bcYn2Bj*-pCsr@^VHnW39^&^A~8WvPXO@17VzU z7a9H2gVAwDJ$?MNTZCY`A%q^J{a9~}IdI@)JT|M7`rI;w?eQErkoWwIx{!D{v7!(I zcKJI-rAo|F?!Z6Y{$r^3P5sx-KK5O?GaC1h9Gh=%Bv1{yN=^J~;V~CAbl}%z^KIUl z^nY5cGD+Q8gM^T`C-p}i%IwIRn(oL#<|-71f~-Stg-;{9FTQqc9j$(SIAe~d{Eg;A zW8R1~jn}SmGNd#swVMWOn}6_dO`Ab#XNj~mDY|AJB8cQ^!dSf!)m<^Uk(5d&Ow20v ztM$#5ou77?5ws}j%o)>5Te)Z7X<3Gmh*f=zNK#VzO!W6vi23aK4Vn}@Imgy&HeBXr z#x~H&V`*WJ5y|cUXbN-%tM76XIV>iWhW8wV-(!9G9qhvQP-#R390q-Qe9_~X!UZ@2 zQQJudKJ`=LNWZeWdJsr0EGP*5K>j}7Ann{=l2Jqjw6#pM`II`@Vl?#RXK~Pjf`L5K z2wK|W4D?!=-)6>cCP6W6uSl1M%rnUvx;?Z-LnQ3pcf2G@x{e`hG}o%NXg9dvWzxRP z&QQY1z11TP(sB}`7|)e@Rl@6{tx0Z|PjZ7yDr{8!c_?Ag*w3rMW|^T4O&5;cPO>@k?l9{dJ`oY1Ep2+no#m>?plQ9XNfy*6KlJ4R?M;Tp!*BZ@*G~sZ%;Swg2JSmUpgI|*lmxysW#i4kKQZ;^JVKfd?+eIfB;E%D+RQ*y7?iY znx^f?EfyxPZ#zk|z>P)c)%f=l+$6`}gkCk?#7X8j*kVYYF{yASy_P}eiMqS4{;6HS zFCGLsDj(U33_9Z8SPs42|^D_8@l3*|Hr*M0E!HUT7_&8WRe z)jzE^-_Nrp;k(ouox3R-Xv)2?IL&%=yxe~csur5>>F&M+J#$Ib<0bgAy343~fo|mU z^-reeFiUA(PPw%yIFaw-lw`h~L3UUz2*NlrU(wMMvYdK1_+<6G$@3~4O#SXG-9A;o zw0V0NTCCe>FEpjgU^OM$xU*!hb^bg?Is-epH2(1a$R z{mVWQ%ltTt|K)gU(-GIjfLvBS4wY<@nbBQh?y$c;U@Dc&uj;Q`iEU^bFg!r$ef zU^E9ER`O2>G#@H%fkIuEzviL@@pTRMK{Fg77fT!J7Z&lnbm*do$H0%ubap6$QZ9;dY<(qA&{@^cYYw+KVG>WR8Jr4l~KL=#qIoS)8sk14yLmZ zAcLK>YBj=H3{SMKHAN!Ql7=VO!2Ds68!!_o1Q>nqkz!xroS6u{tqcp+O=4NSLLw$A zzftPP%C!T$Y6blI#}2FW^JU9#EGn~m$o3HNzhG`pW{H{kgh@|yVSUmQ3A_bv@Oxa9T}_Ha>n0k1?s?k{Z4TGxGGYHm~NMJjqR-5iU?pWpJ` zXMn&SdaiGECyFBPiPG}`DG;~l6^DN7fb(q^l|ouexES_+7~R&<@8I-BJ21+MaHG^H z%m=tuA^()qy<+kz#ZXx#z~g^lbpQb^F5rkHl)|HUvpQ8g?2UsYmO70 zAI`NU`D7@42eyJ<63Tp?(;oOPnQf<=V~on-^7gmM3)JGs*Hlce@%{HaXZ9V`?>r0c zJhtPcTw3dnxQ^~gy_Cq1H5D8WkZE@6-0bn_MQVRBgWpLi&+dXxw18}`wEOd#B(lYbByDlsR3)(g40 zj5@s9|Ai>^wgZ-^p+8tuA$*8?QA%Z7>5^LUFv=OohpYajF}N^wq%BCf8xTBRrXZ|I z;Irya?CBP%;Kpn!`LK0$!ci$S0ePWx!KOYEgA;u(mD*LwIOE&I~G1 z`=@)%v8BKQia1RVs@3iU!|sPkK9CA~j@(GQP)*fFVE2cbALuK!3xB z3dS7?u|P=oC4jvgBf#8HYlov*Va8(miknclpi3x$+UXk zkDy=E_b=s7F@)SDqX=x>k(f1avVS^v3AUIXnURjHgVqP#8U}^yS&H1l$~2;P{yzTm z3gH{AO!4^hTqO9{&Re<42-Elei09Jntrhzid24O=I^>cUiAU9N6}wZpCY$x>FR7gG z>a4DU$FQi}gq|J(y2(?{=+bvk^?*81D= zMv0Hu_Qem#UO$#pC|Jt*Z}p+d-a68_4De3{xJ0qu+TW^lA+u^iC~QV{l(M~Kjy_bG zmTgP`?g|iIr4lUl!nvj;LgR` z`O`b~$wgo-6uVZr*@}i4eumwh-k1TP@+(eDpR>NbA-L>H7PM0HcGzyDU}4?k*`eB< z03ga2k16ka-&?2KVpjJm;6?Qy`FyTbqWw?uS6jMWgin>Oeh8qKDeo-HiH{oB*@~;g zZdFutnc2nM&hyTy)wFU&CMdbxz;9Ly2SQ$!tG<43Oh8q^WwVskV|d9g2?~8g^2?jAx`UGF~eQrA6WDy<9fT`<9CSUZw z50F4zSiI;CT81?KXl2*|0M*NTFF$JFe41vo-u#KQMH>%v%2b`UU zpMzOFS3r%sR|XY7g>#gXMpD{3CA5xQJy}v4+b5M9nm%ut!h87U9==(jDx@5q zd2&Y*#bZe;Sf+ND+HB$`ExyY-p-X_gqdOtdd0nc;*5`6}KCJ3VeyHyRGdwZIckaz8 z|0Al?^SwTvq}hj(@I8wm5u$c?OJdNPsgC?mpe1=F6o3Xy>xtOEo7beuie@E0i>^u< zZwCif3F>D-@Zy8xL8uq;rJ)2r5KJSAbSAQ2cGc8aOnQyd7Q05p@AW3&4i{!f(Fwqr z+OozpT+wC9=V9n|fURgC^sQB%ir?NP7xXm4o|h1|7Mzw|tKh9mRv|^8QYkDyrzQLT zSpOaqPyIV)Ww8I}c`&^HD@(;8mp~`&a(Y$TLs5##<34WeHEWAu!>nfr;V%S2z*{G) zUqLQ_&5DmFfd`mf^j!?`V!u`@##1q$o{>s%?#Tk0ZG&Jv!Z(yc1|LAN-wMqmnLwiQ z0ZX)CX$3V4b@Sdk{Vh7(j#brM)JH>AYw}-Sa(=z8B%}^=m7OmaY?Y|F=5NC&3$0HC zCPy4Fub{b;ZK`8Hr$XIAWLQm;&F(OduZpK3h&rx56=H<(?<`f`$J#eMMt2LM#Skli 
zz3J9heejurU*v7k1nU#vMf|Ncz0pPYRXHIR4ZoZC+Aocr4`G7cijO`nxGL{<>mr`R z^;nuw%d0VUknUpv>nkpsDG|5FMSe^GfXM(TH>Vwr!d3FX*IvQ{;=g@2- zZ+&2PDw^7avN64yAhbr1wR;;#dGiAb)sHD}9!(gYKW~AIS>Z?Qw!cI}i&r3B)>77% z7|}C3sVv)QPK73dcp~QlDfEsOG1$3vlu62hxAcyvKS240G<4}84*)Dr+eHMBVe5V? zBQ^RM4w?}d@0xRS-yLlMMjHn#sl5f`BWK(L1^K3gt}lUi>3Bn~(e$|mG~g4WSw^%a z14)1H@oW>cA-%{4sSw_kP{OVabmznF6SOsq)n=F4a#lWTZyB~oc#no3)jVwll92@Y zNMH(~($*5Sl5E@%3j@Pg|BkI~UI#_aVgMpV*2U(=8h_8V)(~4VU*1R_UTd>=L9N3&0e#jxyB2YAqZyS8*LsZD%3=4EHU4#@rS044$)O7xx=} zJDn5!i!w9P%gU`wq087nQyXKjH@4Ygb>Gs<`o&lEFzYFYY5}L+TIWwE-nkUwy1o!^ zfu4Mujd8bzc*nlpN;jKeW8(u&+P};HZZI%Ib4%6XUON{m{;;3Ff}0Yd8;B3mhE@m36swcRY2&Kl4`o;Ima14Zh7c|s#7fxj2Ut4f=+uJm=ef@p8@({=nu?zn&rD*XZYO zW7sl7SQndJB3x4~O5e#Z=r(mTn{t&J6u3zsamOdcn>x2zd+$JsqR;Wx4*1EhUhv5o zCM)NEL9{Uqu2p6=Rfc%~N{JDkl6yeVS>M$A{JA@@%4~L$V+K-dxXjy*`gC>}^QMk> zyWs==K;_f*$+Gr=%@WlDc_NP~$*-2agsua@plfv~ zIQ{d|r9HPi><(i}#^@P3*wnWdD7h+5XYR3Mk|`uF4{uwpqQ5xLm#-S|c&MVg%nvgW z4!+8q&7yJh+nSkq&Dx$I(;%Ey4}WZVw>-z@pkDr;2{OIL;lm!fK%03r~iw|!NKZRSbiHu1ar$2;y^3Pk~y*+ zzKLmw_YbM@v`j(tIf&`s_gV@p?E~z1VVdixvA&TO>_Q+X1 zSm~dBEWg1NrNbAJK2a&vbg;(0klI*}H_Sk%vgmw8cpeQVsAHrWIpcA7P(`O?8HuJO zu;i7y<%81lnBOA>fg0y*mJU$*uxaJ@eWdM7u`YzY zPl7U#-vGV4Ea70xcAysT8A!~h0pO1FaarwMUbwL|&gK7}6`EQX%=Ef7n6)(Qa6sE3 z>tE)=p$>R~G3zuVkSDp+6j5M(MAqL%y;A9QS8r6;&_@^Vm7nf`D$mj}%{~=?2)=}Dk+JGF7P>@1 zj~937iQC2>DPg0@oMb)eycc!qt<=inq}?s^NEvT}f>xDL<4#|vWkiFP-0sJh<6Ki; z5!@tSUzG}9$Xk)(J2i>`Cbr_F?8h*f)|K%x!>;bR_t0&9gv~C*@T3u|sXGyxPcAxH zw+7A5%2P$r#f1r1*t1`sk7yv8S~WT1)?N|v^^As|@H&|RX)U9za=KmNTAlKaNi?W9 zcZ3~kN&$J0h}a#@68*sJW&%)Z=jSN_W1nZ>^~0@`GZ$qM&2Ncm_ zv#f5pR2w9Oj%I}ZV88V0^n3jxq^Qg&+9>SkaV^yCQkYiA?#WEbrYhR?Ix^X*Zil-Y zQH2Hl)xlC-K?Z>OXhb{2bUVAW?Sxsb@EB8QumKbOR<9YRu4thxdW6MyztS=bKraV! zjuzQ(?`i~Gk3-d4H#|iK!@AJdsAKOZ1j|H+4jDcKN8YWRtr!Wt(G(c}1Qv->BN{S( z;T&`h>S_bZY4a6&PK!I+YZ%)t%QQ8b^`d=pOf7~?EPu1UbOwBQmg_Y!`JJsB>hX3A zJ)DHfo);JI?78iA^3GN0n4%W~tHD{-Rkk0`@~6FPpZ9RY;0U%9fN1QCBQp66F_62- zai_ZB1Bd%5o1h;p+y<5P{21wypk{SJhRQ{p^92#>5BLqkURw^E>tq^9FtMK_VmN*p zHy`{^!plr<#N6f1*SX!`y!wq_p`&nhm}U+#4f2&Ho=$3)d43cmmq}lmtvh0Lz7%csuHDR? 
zlh$2sv$+nky0t*yjgmvMS8}QQfqJL6##SkiZ zspL-gY$!NJ#13eB@7={V&1EFOuWh|(7S2BlCx+ODL^V_=TklomEE{n7&DcEVpOB5E z5g+;*z;UY+EJi3KNJgbv^~nVUN4X?GI-b@XkKBZtZW)B5dA;W)6ZNwO;OUW=PVN{> zp05zwb8|^XWxcoUx7Uk&Z}s-DB@iWZrT|ZE_0Z?B=);WynkoN4TiWB7)aG%4-(Uyu zENDz{U5Ue0ae3J_3yQM^RMiJ%@gU@M@b+*X3Z)}rjaca;*_Q!*Evr%G1vvpUFXTm9 zUwWS5Kmfl|^RG5wdSU3a>(>Hj_|*pIgExY|ECIf<_GXNz5W!^-SL3u}7mR~8KSV^v z>yuVKWTi(^_B0|VnSG;zRq67Ar`n%Cj9DvOx?yd-~T2I~r$b6XAqYM8E>I-OC zTUy2zUmJ$t&*yy#qeO{6J)^Weedhicf<>bN=d$vJ5?;LPiFT7XTFdx4+T+?iwA$tn zZePe|<;1j65z(;z?;WyJ1DZFDCnUD7At3m1(pjP@M!Zt1& zLzd1wjAs|`xBMHax!n$8>v2+NZ=GGNHl*coCyPP*Wa0cf0LjN|R2Prm)pmNaT?m$F zh{G+Zof3Tzp**s8kvNL+PPgl(y1g>VW@_8gGmnbTeyK!~M>3emVQCY0$oiRt8cyE0rNv41+xa{Q5b;t#@r{?v3i3tZz!lX#W z)LANzmD?RyqN5&JE4(ugbZaY08v>gRp%4^^?(entlYqOI?g3lS|}vTu%V#bzp; z7cex3nNh=2o#36FPeZQj>e73kO#dovi9o@jE#j?O(k!Z=Q=}8J5mR2Fh}fy24_Y|G&~D>$Y&~ksA7au&^dkP8|kOH379H>N4ImJA2DV=uzfVn z@f&UP#8%ft*V^DuvYv?dqI#2js82b!-_mV^fW$H(>bWzcggP_(mxDy=AF<#nt_|ge&IZih&gZw zm5up$#s^4%vEk$!jd??}gnUrm^6Omh40WfB#^Hl$gCjj)s>>YtAb22mi6v)+jqqSH zH$sjs%G)^-1-qDLmxOJrw|IA$wPRw^aeKs+9%DlR&41D{Hj>bwcDG=$#w}@W?*u6~ ziai`vMXE{J9^bBfek2XGGQWyZ5A%ugYLBI3S#neGNtY<8XVT&2spwZ_t((H2jmNLY zYjvuuXMR?@LzDb(d{W+_xLNz%vX1=XxJUlj@?;|}QzAJ5Of6Z@2sAiaZU|u<#$33G zf#R_*fJpX?GA8p!wLOO=A>5DmlcI;sX&l4?J0NnVHKhyg!Cyi!_P8Mv;l7Fv>H=VfaKn)h^-+EZH|P znN&kT3a^~n;0Kq<=wcly9@7>%T{wOWzZ|FHtDYp&dfe3OLxTfS>qWirdSw>zRhmca zjH4r${X1&fb9(bZ72=ZU?-TN2xh(nUBURUP}=e-|cF>RUzG3Bz8lDAd;Fjrkpmo z?}e$9e+p2{baJ)qLO=LaRm2;moMKX`Wf~()j)ik}d%p6}&&$bb!JFcKUYZ$Xk;Shu zAa_1>+P$C+F(H*Elk?nNrj@4qyZ4+F?qoF3Q5gm)Bf<917e!N&>p(AEn~*Y@wi0Dxc1guZ?()_Lwss!48Ec0d;2ytnS#g$k#q1ws-*jN z|CzqeNMP;_!51Ud4AZeR3H5UAJ@)TaR_u>~uaW1?y1GrtJbtfz(4xd;RCBe&585J< z&*0Fzr|d^oqiUp!#MlDRLjZNE!FDP>-q_^w2g*0vkJ@eI!h)cy#CMiG4bbDJlFIH- ztw_hf8P$1IQj}PrpLhgf`IDqB4`P%r0+4@~gelocO9;t8$lMKzYOM}00`fJP6wy5t z(jzur^))W#I_92x{WvcZL}Jji^?~Vew+l+;%&FX3>0%!elctK^PT@`Q}0 z_8dtlI-J}C2UaOJF@u{%Y|Xk4_Mm|$#mZPFf5CK5&2qW76wGTg*#iP&Uz*JpBJd}t zgJ^$laOm-5oDY6oK3hsETFS$WRPFCLa+6Pvb7mW)Bcj@nvIT5CoAt-AL<}a-$rZvO z@-)hqbI#$v_TLZ3Kx4vRAQqfk$;4pQe&wHg5QhTj%pZx!Ju2kD!^wR3(=6{RxANHR z59_nvKWRc|%e&5ibEFsTCy%LZ<^{D6PJpb^Jga`I2tNMF|HIx}hGmtt@52Jp2udnQ z2+}Dj4IOJCFX(G***!$;;p(UJwLhCbqyl@Mk-zUOB84)@M+9uG)3B*2gW zwr$=!R3&U0l^poxJk?rNPm+08Sno3yt-@KE^JDzf%Kl8hT(NhNHsjO8I20->ZEa{Y zh!q|-m0QO6yLtxpzDr~i4uJl5RB4tBH_Febcf7H?t5_$dxLk8>r+)cxJRpbXT_DQ@ zTFWfH@i6`ATg^{XJ#m)7iWLr(l`BK-?yK2c6fN=P3F$uZB!hIPGtF$|U%Ha8MJ{lX z9|fbhu5fQ85cFatpYeT}b>)3uvDT$}u=uS%avh(6`u)f0(dS`z<2gAVXenYe>AomP zYYP=LNr#QMuxm?Ta7b(r%~iK zQFx4E+`<+f!{8;{&#DI}B1DN~W-aLf<(HB#LmfBpEXzi3RyBl#!@~vqQO~amwx)9v z82P0#I6nsE!d=g}3+J6NcTr4=av-v z$gerLOUON0u`_Jl*VKxT7=kwh?3MUHn;`1Mc&!kaW#SB{Fu% z687*bf7xOqnUBqrSv!1N9mq7nr0>Li?^_Jzs>}1J)oUerzQ`wf!M^*%|3P(ZF6^1- zemnN9trER4QJxEHAt3=XwoQhBfA%^V=W!bXe-(L<*bMHNGNsChiP9>w8TLFj^oc*4 z*U~+XlKGCcxewCz!Z@e00VMpN1XN^8Qa^Y;`JK@Zql*+D|cUBw;URNd9n=x{4oH!6_gGmmr zFz5VaTa_Pu>@>4y=D0J`E~h*3-AOHl**b4QW;`cL=_Lj3)+?BKR&{Ppnbb5187(&B z0S#XIv9H{LD>Rr3i<&`~yTNIZX!cC&M>R*dt1IaAjP^{d!_{Cw9Dmtgi$p1qGnfRciP1}+(~AVjCOjJ#gMB~1*gqC zXMmZLNw$)_Oo>>JW`8qbpK#U>xx&TsJ~yI#c*3yJ~SY>19l zZ=2%eP}$OC-p2jXRmN zY;QOCm2uCw7M=FSZLrE*9tG8J|DpH!y+8&%=apQpXG>!(Qpgch24br9HqmC|RibZE z2x!t~2q`&v#>vF7X=4l(;gMU}yyQs5#fVv;%Po>P8erh8nd2DSE>F{j2MVQUiR4}> z{LJ=t6->3wI(BJ5o<@8Wo_)smf*eq>pjGlvUD4h~-i{tq8bzg+k^mPljd*+{IqzVa zX86dZv#tKgw$&)>X_0_GX3Xl?_9)vF<8n`ur$&ipMjq=NI*7ujoQCIA*lKKftw0MG!|Ho@VO!q$7U-2)&l{~J%i0fg zX;)d`GG>x-vyIyuvA*96fL1`^GfT8O5})$rsAtt|4>cXZ*>bT_oZRMgE_ZLoki@!9 zMIqYG&at;#AzV8N^@ZON5||+{QgxF^VHM<+mVsU%L9lJm2{82!#fWO9I}UeD-ahiY 
zv3F12vaZknuI7Xu)hOTQFrkKm&-sbBsY@g5?7*=0Wy{fKK8JY|hW7=g?F;^0CAmb#Bp*R{UHMY?bG0<_Or1Xr1iV_A@(eUp;@W>QXcFZl}zmCB4;36Z)A2 z%c0O0wB#+*(So%NGt9bAg)^R(RV1*`QyPh3t1rJWRqD1fy0VVc$Xk38{~AVUUQfU4 z#;+r(QecKiJuQ=WdQrgQ9D>X8*`;wF-^Y+A*PMV!Av9OgIgBtK#w)01Oh9th|fAk%EGz)5ao9~L=<-T zvVz@*Yrplj*S4m(BJF6Yym`@Zx?pQiY%@9l4Z~PLZAvAww?InUscgB0i!XC^cRw+< zYP*xq>A6AZhFK5ZPB5SXt;Qi4a7bV^VF)TgdXa$O%>mbocX#V80AGH|4*xw<63^}` z;2hst7FEUW#ol*1V0vutZA~Um;DVwGoGuc7R&VxDTkq6qFIH=SpiVrjE!8nBTds#f zUv?!U?1dF2@l|kJ+tc;hp$IbD5L#`>;7C5eKZ#8@_?}he^SS$H-)F~j1f!BkYkR5w zN&UkGhwr013v|{=Ja<^KdQp*9b9vsapS3z7Z9f?jFv@!C*>ah&B87k5Ga249J88SV z$YUp}VBQtau8GFgQc0F9fTF`RJ88r=k|#TEAd@+GOuxU8v#iI0gff+N+VBScT|5g_ zJN<*VNS@;jP#+iGGJ}M`q%uj%nY!#0()6(7OiHE-Js++pV)JFsD_RPwnJ{JdbZScl zo7tOPpHL(N6uSFVZ%eQ`)bH7Q4%S2F9d^dDsT*We>*^x7YbD|t`a$`X60N$}{0l#W zsZ(nk0Dx!2eq=UY(jFBMxj1!pFaYqpjoN(zZi~72g2_{70S9e>4M*zDqjWX%+WtuBK ziD@eHsgi0oUz`>1T^`5M98-9ME{Jw=UNpS$D^g^{Ox`x%0zEi;q~G;FM&9 zKD>SYp$hI)rYU_~278WU;tJ7~3Woz*i&jRYfGU^Rvj=B{C2sh@W}#F%A5}wo4Xy+Y zn?;&xqtFK)?X>I#!XZH=1QFl~=7$k-=0pWUHvKh5ta%q2E+s;|Ljp6zzql58^_X*a zC{b$nwLermV^GU~S8`mPk16Sosb?t$k;#!j{;edvg;loi9Fd7Whb00WW!XMc@^Oc`$HUhsZJy|+)pcg*&|hZ8?r)T#cuxqvl6XrvmP47G@Dp4MaKTE!%)VvbDrq|Jv_*% zv$~Su@!Tl-X>2$(-ygEb+Nn-uJRq?l3ceB<*&%A4kxypaA1NsI59wL?k{p(0pVc*A z!KRoqRFl88rMX5)SVpVmvPbH^N}ch1&%i!}wKrXK&~6NANj#@#bfZIFoIl@pEHkg( z-M%29E~x@Wf{R?dPWxb~vk=belENR25Jk0GhF-6Lzl!l7>)!YDE}5EJ&gmZalcZ}@ zTXeR*qG#2bkGgIR1t?z6br{RUXM7*uoYPf*z{F2+U$4Dk%~@fNU4enI-_N-qxWa`; z-K?AvbfvD0Jf`2iN;D~ut0Mtnovnj}Hg%7gW6tW6G?RpdY1L0M4_I_CVPY*@I~j$x z9n+nR$5OrO`fEZ8raF{!KG8XxQivK^%*t+8Fhtm{AMSQ=2qouJ4P7bKO(D~@YPm|j zZQ%9|gt$T_ID_i4FB;Ir3VZ0bUM2cJJ-0Uo&_a6)g%MAwly>cT#kvLHjAt9lAHR#e zpvZCJlKMf-c3%zpxM#N`xa zQG!y}(mJ~78r@;RRCWL`pF^aj#==f@=}O39Cz2_Vl+)6@q%=yWd9<4Ls%17HJ-`bSO)a=QX+M0+E4C&ISHpy^vSnw5B1GO_T>p68rAH9T*BF!iDTRWoZUKK$N zb}(jiiUcgQE}g9;M+QOY)~sBI^*g!~SmkYZ7Duh+a>5_@%;Khxj^?&MmK?_$vLqr^ zoFd2I&er+_F({26v)SV4jXYZacKPI&l5{U3#JP+6w}oi%6FL;ba#L661G7;L_{j(qv_b{x55>X~6^;y}$7-hS?W zw-SIZ&lh=~s)1rQUNd}mu#;`Fe3Wc#gUzgO(#pQVY!99L$p^$!P?WjBJYnxNn8Ec< zz0g7vLqeJ{kypEF;oHQT9_)5Y>YZo%T}-_Q%ZB4+&ZQ`iScf(%Vq)!3Y-b-5ety2Y zfK2}hQ7}k!`n*v6nj%-BfY^92Jsdc|bd~FvO$QXts?04i{4YzWJsbU^&O87ySl$&w zp<0Pm_&&^aR5=<3#R>o-BJ*Sr#AP$lkA`|B9z*e=Xl~qGCgb6lxR#a7&HNZTDIZ)y;n^vmG-cy`tKSUa)hUZ6qbu3Qrhn(w^-KyNGQIRqvGa-y?_DsE^-MB28 z5njYl9?j2+zc&lOm(x%A9W0w>Ad}ge_18q(ZS)J!X;h_R-SdG%DO(Vz2ASOeudwp7 z(30uIQwFy&PPH{}c2l7c0@_%#*XO~HCe$*`;uxVbLTj4>;XLTu03T`j#&Gl+Q%g%r zB|)B!n`pFHG;Ky_9(!qqP=0*LdYYn3&tt2m#_(2T9Fq*>0+>AGoRLojolc^X9CcEv z`L{$?sD$hveP*ky1d@W3bXJRl$_;JDAYcFLxF<(yFBs+~RDaX3s2mBuC5N$cxHYR?s7Hv2YR~~Gr4gBP zG3)#)`HmPsj*rsy@o1DNs8=-+!^f~V>=}kaG$$R61kq^Jb>?-`6+2@F(q;9yo9R35 zcqyXf=~*R5M|&@y%zCml8%(V}GkGySwVzP3nJrTtG1JIL8K$$x=$OpmnUU!AkgQX= z;C=D;&sUH2Zr;tel(!;B$>Vc64q!9r__cqWbX+AOm#V3u5_l|9%Qhi)HL4xP+=h(# zJWV)sAb896(6?=Y{x&AdrE~XrTPMJHbmsUYAofea_I*x-=Os?flohwr=nUb?ZK^H_XDZdIpgcnmNd5l8*=l2i)?83;N_YOw zR5<@~S5hwIs~k+=J(VPd92MCc64X3hqS#K@TW<6E#LSLGg_76pAEW6+1`F)?IhYK) z2gJh2lbt>b?)mCHR|aVt7pN5C_#&V@^l)IZcd$KW#NXq5k9?!f%7e}FT6(8VgZ4c- zY2e1vQYWOIO4IHRJ+D71;mDG&<1c^#3f8e@--K6bcZ+#iKT; zE^#O|R9sHhX#7gmh7kR* z>`Nl{lE#maw*fO&r-S^eJ;Aa{^s)5tG0*bV%A%I2m5eG4P$PA%{tey@iVYfyT1$eX z_GB|020pj*ZTUu-cZO1T{6q-$GVR(>aT+QdL>mBbhm#10b=23|1%t*EnsJem;+`ot(Xitf3?()zR=HofXcT9cw#x|Lxr*7vi- zQ?cCVMstv@#r8wUcl8I#c-<^{QJ~Co%;mOt6HKzku(Bm zGM6ImVIntPo(HA7(PRJuXe~n1sj6_T&s6$jyN(*4Yb|v`A4=Rs3yK~M#EyPk=&SC~ zC4bQLu-SJH6!7=OFh=sm>Voo^&pB|B)5;XeT4WOOtIvWflTyqk$uv3aW1a?JhgQGd-~R*J{^t_@ z_2SLVU4mma3XK3-)hdyZLRD%j*w;7`3@MC)V)drqGp$b4_fp;!R?gFP-6l*uHI2!8 
zJ!oJ&;BafCOjnI~4UHCE7)B^D3aGXJ@_sxq7hFAG8Iir)#*DS)P~h|oDP?0&pbNq}cVp=_iC2l9vjsxk=Ozt$E z4Q`CFodE?VR^o3@SvH17A}2^bdpjSH=G%C&I?iz2 zGVV!{6qgjJ))*BpKg7l+OG$eE;q>|g7JCt+$@K9(uUb|S|6@zBX%72C4jR8yeV6O% zm9p|az-W?TdSY=tR&@YWQRigf1Ln24LrJlmgKaqL`lZCa!eGF&bweM+wNl@y9KQ z>lh5msP6HFLw8~>lEfB!nN4FA++;ry7)^b|@`eQ^>ZMix+iKmfA%uqB?eXQN zM;uXt}iw|bRt$3H&l=rkDEiS=)y*{cZw4A(+37st-C z6(q9S6BaM-&j!j$Pl~GEAQ#FYP8q!_h_r0D5~Aj+%SZq@ZKAP6A0Bf%FpI&y%VM7U zjIzY*uo>IrLPq8F5zbkc2U~h&qkW+@nM@;osXa-3hvV#QgSVUt!t@Lfa8PH6vsM>Gy*7*%eE(q4%h_(BHHf@TQA34Z_Ey=S^CIC3g1~P_?EC&z z_y6xT5$k;fl|6JkDLS$k>u%v(SlhBDV#BQ(`)9@PXfS?g9}fXWj59+4Nw4?+Xup5H zL-&!Fu&%1Jp1Jir2a3fvh7{yQVB_d+RTs_uUS0W%_L2+`C>G+xMNTq-$Pz*!w8(Hi z>%dB(`;zxY<5!#J7X=*({hcECvTAq;HO*$I*<^#JdTrv>cAgu6DdGMWt@^8Y{Tn*j zlqq!L_a!Y1Lyw=IdGhwoic>XPVjS`x7|_4I;SI8vzh*@h|F4SfZ+dy30g6fsTBz;a zZ+`nr_iull6MGYgqf9VG{I~x7Dj>byfYCx{;1K%NR{717%FqgRRr_<;|In4c48~oF zH-V5h{m+R1{njdgozfZ&U-{?M|KkOb4ru>waOi`7#k>@0iU#;J1zc7CfWZ6L0ffRx zT_Dx53l{nJ87Kft!+NQi_TM%y6gomIXdfy~@8!SWYDLP|8bpm|@&5sXsR^`?_z<7u z-|?N^P=locfJA>5=KKi}^}7pbGAnINdeO+I;lzT0BH*ZlrgJh2i3NOc?%Qa*ZU8Jz z$&7~WU*_=(5;WAwwSe_~kT-j)>fomsT{vv%pZ2{PNOMJNv(|I-jH7uqXfO_y*O(M8+m16U(exVLnbZS?NKHh(Q)Cy7Hx@>Om~0 zX$4JuW0QHI2!Lo={2rzx-pJ!lf={hjlK#|XUo=}iODbJ&>RV5 zE4DCq=uv0Todc{c#!GRA$pH$Cn?94301tzGpcpm?yQ9*uw*zo`5#xVH{nsh(y$dZ@ zr$C{tmFiY)a}+354!pd{@+&hB`H4X820>>y0JaXhD7FRm_CUI-JiEhX)hGkkFFX40 zf6j@`Js6d|sW4H<_XcLa^k;bO&1QsFZ~>gwWur^%{?eipKXO> z&{J2v4-#5C1kkwky`3~aUWVYYMl4E`LsC30MMFJZG%|Y=*FK|m&}$n2JvwT!J5qTC zYN>Os5^n8?UdsIm;sW`5<%5;cNlN+jqR{-=A}yd)KLw%WXb4k1%>i~71Xv?V{>Z*- z)Z?Y%`SC1RJ`(X)X9p{bpijeV^J?GY1uogO@!#|QRY?5i`o0h1bEP`2{cj`@px82> zZ>+>V_tprs?=y@T-57~P%apWdSEP`Sq1Pk}<35fp=M{L1GE$@JWkV`(eu%D2$#Z_V zW9YD9;P0@tM4C}`x>4A;xYx$kaanCkY7iJj_bS=*+L=-%^^HQFoT`6I^JhOK@pg35 zvY>n%Z72RsB|>^CK{QM>+m+tZ0sco28jYvY+(%L6^guz5ptsbDyKKhH*J$<<#UK-_ z$Wzt)yx0qwB}GAP$nLnevD&Dmqu>ObjdkN;W#< zmCxIFYGS@|7=rs6K8Esu3{V`QXZrEQ86>5AAww-XXMeM_w?lQwqKiT+kue`QMftK{ zD>G+EL!?#$Qa~P)acfZHimBit(!U808R$^W5XAx&QFZskMk{`GZ9rQto1zvskfAB{ zVC3P!a@TOuxhEBMWWQdUNd0+Xr$2R!X+{WlQ87<@Zq#E3eq4Hr3`G5c66{;kt50%1m!vxZuwnL${==m*Ri_lt_)1O_cQG-q`Wxe@yNwQ_eS zgK+2o_|q*O7MWxmjiRXnL^e&hGh?R_0+7v~yh9=6kN_EunKw;P14W{G>{h#Pct(JY zBHnx#HbB&K_WEkzqdoQ7NSeP!y^ErM$a>cI1bn;qk_l|0Yc>WMa>G>n?G@%FPVMN_ zIlKmsPDl8}AHfM-!5xm?2>9QN)N*&}oF@^sb78ZXDhj35tQqQPnGEEWu-@Wu+)R(A zBP={q34~Blx}P6K166oc-ya1pr&^~a<={vOdz<_>hxK2?l2`1@rs7udVJbztGv2dB zjCzeyhx~0FPvE;gdbuT398PUWSATC*4Y-jawhNC0Iol5f55OMR!OH> zwm__2VP>bl>qU|RyHq0Ahp<~W`Wzv&;EVasen&1B zq4h&mvoQf43ME{XTdqeN!5HCMBU|2O^gi z8N)3yqmY0|^0mi+>opw$AashO=p2L4dD{HP-?G3(;jg|gEYyfO z3XrLLNZPwht1cmv%LKbbq4+VqNixn_JNl`pE_I@6H7J||$N>#tgjNNtJYie!?9<5- zyIsct8YOix9E@2S+q(TgrC1`aeJ1CH=gSpd3qTlIv6-|-l8^RhW+Se{Jm`XtWR=8q z7FHbtyHHep?(+I#M@deqLst-K9xTXySs1T{Zn`RMBJ1y8%N@WlDe)ED9@$NvZ<->d zc_`yb^`rGgBijLQ;31d6jn^RRIwq505AgwxuezCT(Tn*P1=wZ$q}PN2!L0o~n(#Cm z(=PU=-vH=PwCgyI{ut+uF_IC?Qg`GP?+f27(cfrgUblmo4FWX=`Fs5xy8+wbESQY% zL8^vYAjB+Z5W(iSw$$a(a4$4VBN8V4*Xf5GcY+g&&yaCc`k#%q*S5@rr;A3aUGzP; z6Ofjt`Vn2FAr^5v1Z1WjE8CR;B2T9ft#pwG?kLtSCv~Q+x?y)Z?_Eq$rg$7@Ulq-6 zY;o36%4fNy4(2Ene^khm;SVL?%*iu!*aVXsAF{yf(Il zi>=7l+iV8iV$1vJQxDE7ireF4Hx+|IsV?1aYeHTwm%VBe+A#@_Eidwkz)Z=`b#MO3 z#!ZZgp|3iiCRNw0#b>!srG!%XwUt#PtO4Lnp6z#9bt+&igA5nhtvJ=vFai49pfDMf#7_|v+=4d>dkMDR;!2i(EEJVy@fS^ko5wTexptR-B=DsgpM1sJUkIXKKSG1h4BPges$+ZZ z2Np0?31fe9w#8*Jqe$X#h+@s(>G3)M-v8WcYg9GWzpP0ZvD8~^$66`wFhlDG*v+h!-Jp%w1d))9S_87R=TSqhw}TQ#jgnpnb81G*}I zk-q`VLR(yOdj1r=kuJe3-Xd9iJMHO?Bt4hYu#xDL9QZi$=6YdupWP8PhoC(qzM z*6nAxq5NQfd*;P=bsj=YsX3>$f4Ej?KOILk%v2N0Hq0Y$+2(6AyTKTH*2J}h49SAk 
z>LxA9`{E)2sPcFQcXsvLlS}f%`jXd%vJ;Y#&G$am#?(u7ZU+SomUYBX7SI$VM|VOz zPo-x~rK+EC{YHnV!gx1@w?jazTkBG8GSNL|X;R|in1SpaTP-h!NnM}^b{x+c6$iE$ zgbFJ>(C9?f4;ibXVvE{#xu-wjY=2o%>^-DBHwO*kX$0#fcyDnepEu8a?|U00p2bvc z(kstwwpu{Ag^Si)t=vKMBhcMzBJV1Z0YnjKy~{Cc7h`B$LJrqp4cS*utm@w$unklL zELp{b@s);`YUYz=#T>7#E{7o6YS=;nQ9GXG2Gv&c-rHRKHAN3gI*Y%z9lj34q};3( zC`o2`^@3ZvyKvpZIZabNTP{C`*}R`CVxTqU;GU>gEW&NizwkjLqCb64r@5nM_M-R} zpp)`g4a#^rCmRG*gzS4NBfqzWiGBZC=*k6CSBuMRlt)V;ljVYrlI(PQ$TuSToM7b9 z#j~1Hs+PPC!lZrwd^# zy`>j+1+|&|L&zP|&UBOa-fYj-TCif-njdrO7)ZlktPeSBu8-sswqqq$+I4x@N&-coQRh?L!S*}=Fi z*6R5RP`<=VnNI@Pm(2q%(n0;|06-31O3tf!DV*?GeP-%Er7(Zh#-BwuY3B1`lIAHC zN1$5o_B{!E0-@EWQ>0SvSE2{j(5)XefuYNJf8>o9v}v@zLK{Nf>pJ(Yl&ft{?tr2- zk>*6|*NhJOE{m(OEZL?6+{IUQ?6%R+%Q1E7Q&w^8ueq?h!jf=?^^bww6Is&{2v<-n z1_Hl%CIuY_z1&%xgkX8m0(l=f<9^Yx8ctFU+X3PzcWtYxdf7D28l~p1E0m8u~ z9>8H1CA$+gl%_uFUMJL(#OpxFF!LWQfGkX=UXg)!sKnZW>ti2$hH_-(*M|y7$fc4M zEO;$eZNIXA6j&SE`_56*Xu$8^8%?cz_-SpN>2(KA$N8dXL@uwR&3sRNifq?=N)Q;4-BKskPr%cDkX*uYM5om@!&n(GH| zP;oThU?#1`OhC(9RCE>LF@xT1O>a*}O8Lyh!!?i%i^%{T)CpX&6ib47XQI`M%mdZ7 zj-5HLzfHnq0PM&NqkdgFA1IwygD)u^yIk4cKPThu;*`rC&@|w$J;qn8sVrB|r<5mF z_YTEnkplFz=%UqDe5#_Ea@dhFxMK#(dCnuR{q{)}4L!x;xD7wd_@5+S+}*t`fmrX* z#KXiAnv>c9Xc7CbFIZngw0h!7MFw7?DDX`Hu*0_^F&%&7#c6^OY&z&uLCZoQy?Yl4N&2>~JRhO?8O1#!+Ngv)%iiSSjYmwcqfpic zHyD6y8ouDVGMhdQD0c-J9U~k{*mPaKvtYWf>>nks^pRn;mMJT{NSP$Xp)U@~B+ST_x1$~3=-;B>SIz@blm zd~+zDTMFHg4gBO!9Ji!kC}EoHZsN1UW$Eg91-v25hO+JSfC+SA@L}` z%bPgOAfa~T`E?Aevt$xaw_MTUw{U^|rEWv4<|-S3HqB$R@zw+ubmHf)0!{U!Ud|y% zoef{!1@&e_NHyb3hGM`m(>i-lj@0$T{q6JyA&OeNpCf_L<5Y|1{M%8zA-*Q?DAsuGv&fR+#II6Yce)@xP+<;#L1#KGng|6m z8-vSYA}sfjZ>_&%7!TSnTywcglcF^c5{yakYJbUdSWs0r%y5Lzc~Q1>-5RkKhDLEy z4Ck(ItF#xipZ<28Ai?)nD--tW8c}p)M0qk=$MeZUz6TY)Hr+gE_sA;PBGm4wsMhIM zSJXVaDEe~bvK%**Lt7?6y)lJFd0eCd5Ah4)!JMK_{;X@!u!oWPa$zAkv~uf(UyzBi zu|ua826haSWj)ImGn?P3v^VTOVD8UmS9wfTxb;1ku+F3?U+KVN7gB*uW? 
zDSU!P`f1UCXjatA6e9&Ec4=xb-|` zVwU?OM*a6I$?i%a9z0KYuP#foh+I;54ZNadM`%_g=aY>QT`Buci>>XA&ty5q^ zN2QojE6SYJ?vV9yxg-LriV_%(X%3?hn2H4Lx+nISe+@ zP0l&H^?sDi(Pv}r4?CU3^%tQk1*BkY(P`E?DY=ay!&Cb&QS($_lJj02M$%|7C`V9l zAXK$5DrJloXh!ZYiLHLBj}{c%;yB9FZ(3RnZ`pSon~mW0grY{J!4iXb(9S&%!DYiz z>3GNv;IcM(imfE#9K9cf1YRuuB#U^i3g0C9dbG zZ|S_35nb%m{0!yWSfyiSg$B#Yn=4KSvq5YU6-UTpcAJ1=o?=KFL4j}>kKE1?oD0qb zt`$A}c>P5x*N2IR?{Co$Df@Y%d~@&>i*>O!v{b9Cl5dep#pqhaqoSI9{1Nli9kwQ( z%(~gCRrAy8X(BIrof!y)y5Zz@pdT}di|Wy6l^A-qrm^&w8JCheCf9MyD6)tc%r!Nj zA3MelS@wx&xtG6*FDx{P@{zRDzy7=H^nDYINRy?qUUMQR%}a-!8GH$JI_rMpQpZ^> zi2H}BHzh<=un%Zt9JT{D&s*ysgUl_P*`*U!vgi{-U1!e@@6Z%Lbl}v5htD{p@y$7S zuB~UH++;(>cBR{%)4#>$g}qyM5{tUyyNSsC6Spnvsvqfh#c_O7Ro3|ngz`u(p6NEx zyX>R)v3FVmvBXT-w8pO2bLnB}vWcwZ{ha0IVmj#F4c6E*oox=tLU?_-+K0Xt}L6Q2TQJptc=PXY+S?L%SjLWT62w?G)g#ZXw;l)CU zL$?Vx@AL?5f}P6cZBDT&ePINsknOu*%;kUN_0C-jSH<*ErEZFc3(fn*hyK5baS>2J zpS;}L=X>zI{Zw!+Moxl%#%Nlo+wUN5U))nQ>=T__RgvAsfFdEM({8HP{*PB&=ic4i zZjh*dgoUhv@sQnfn-f4(yW@=%4$~z|)Bxti;e+tlU6qfKQxuv-?h4iY%nR-2n4xEQ zE7dE>Y@)`w(Utxvl@-qrYr{c8cl#EvrMePlQPbKqr^cDyXJ6eoR^0;Zbl4!YTDLlv z=nSU-LSC=rxfu#}iRUU$#<7%T%pSD{xm)04%(U-6V5tnMjiOd=p73M0b~#=*G*2)M z#P1cV@+?O^SMeHH3LsB@pEGPi;h5sLqPKOZX8_<~VwNO&604(0 z`&r|dT;I}Yu20)15VkY^P$6WVsqlH9tdR?3Wh>_&&L+$bkD^iSSR_$`!RCwE# z!N`fO_0wG2Wvg4WnydmagUNz?)&D>5ccn-N?Ze*E)#n7A2Y(E&E0_$slu&Z{5%yG{5ygCBjSmAXH^+htmM$D8q!Zfgg@ zATWc;xwcQrY`?sdm3z^BpgYsJ{c3Zx>ngTzE3U&PGfxilkpUj&^IIJ!G^wWJOMqTE zGL!b6bMWt0jXns0aeY%h;YpqxHJM4U!J(Fnw*B&Mf;kTqua)DLD!cNR?7MiHL1 zoD|!2#oxX}7Y?dxE1X7^OEZ?sdH+n2f&w}Ci}UkP)$*8QR0qq0FjX&sD(F#_>yD*P z-ft}qt*ruRXL!RqR*u`g)xPDn%}^&>oLb8!wxUmc@z`ybD&%S%uHv;$PF$|CHGI!W zWNM|9d5&UXHu4*GHiot~ zi1v9h-sWcItVm4WxLu+$GNv6R*K)A)diLT7PgO^avNyXzs@pSc%vfpI$Z=NyW;Y-< z>41MY-$v-E5%E;?!uSRDhtS~mvKOzDJ=gC^6N zNoBG(%}AJnv8Jy#W5eYtwcJ^RrG>XfP(TXpD2S2jKr!+B&9vpqBfX!P60G7nLG%2N zAE!K_nGB@{w+kdhGkarA?duZUfxliBX;U+1GW!btI8I=qJJLPk0N~(OQ(#n>%^3OB z<4}X#%CeG20D{|eI&Umrq6fgbW3fDkbJQl8x!`qmgfdjNrbl&KP(vrauj}hON045H z6^LnWOsSL})sfq9tT@|f5hmy9uBb-Q8dQsU56YYKF6~{BzN-6YDYItV29Bi(n`DlW zwXTf^Rjy%OaZCqEr8|+yvprq(V&nUcdE#=s?-V?lzZKMO%@A4Ia`y7xn4Jx6*Y2p! 
zgod=B-pDxfX0j(}N;pkjDunSd}XtAfSlJ^u6pdGZiQPsgo-Mq zjQ`CYDCoNg_lfv^TyRvqc|ea}=fe_q`|liOBJBLeUGIZ|2ZTI~%8WuDmsfcQa=`=v zL|j&8RSGrMEemCr zN0K*20i0&l2~E;jh=z^qx$d(gh4gTeQOWN+RpgSf3_RSj!3&RSo8KCPfKW#ok?wp~IpNV>L-zfvc>-O{X`r3Jvwi*iX49pLFB9Y!oANmNV4@uY?WKv6Y&r}FQoVcz zTmxBeR{dwY+v3ly=lxc)EZsLa&xYPV*+>tft0A6t+2}S7qOFinw#H4{f&9v;#ATcf zsIAIbFXQg(t3fbjDyUV=D>ef4L{fS72a`8f0xt7sD}z#;I}sO|9{4fkSBHa<9a(9# z+GmA@S%V(OgLccay^=mb9)Ub{3;W`OriG2kJPuQcOLY%qatJZ0Qi^_=8WTRgxePcC zRSZNhS*b3t9{Z7mi{aE6#x1&s!gpyGPn84tB!hcS!+H}VwA0&yb`gZ!7D~C9|Y-QGV-L7`Lj@tI@piB|VXzDcNxeBn` zuD+4biwSOex7H@bv4)Y6XWZ1}wU#D~A1N%a;SN%q1zOdJ!-@IqyNcUZwm5^DdU6XJ zA}aB2s!l&54;FoB89)?Fr5dP76Uxv7;qFRtUSfvBtDURH0gD8;ut|s7j1J8r9eyp} z>$-y;3oE)zoQoA+dg=O$!*-}j2KM8s?|6&sXN%IAyqEKiXDYUI7&{H6%?hTR+f}xK z(x&pkcU>;FD&=mjO_#;PXfp2y@u?Xz4~(dXiSHRF($yX=eD&YN_+wT7T;5*s4;Y43 zy6wu}qJ#hQl_R+=Go^@RC7*x4^scqW*IR5dfM_P-Fovayr`K-BzvsEBmd+Xt6dVV* z(i#m~nn=>@y^D<*`EWD$eqr%xF zIU1ptkDV9cZC!Oa^3Ex@&%!(t<9z6>FUtl!;t8DQ8qQVsk24c2VwSOlGKQR@S}i>< zau3^xMV_TSKdMwOCW+)n-i@r)jR1@tDxv3^YCKbkokDo#i4BLXy)nXrQFO7a zTU&?i@={b`LfNyGM=2Q(U&)L0qaDcy1JIg&OMp0D;Q8Fr-!qu`d@roAafluMPx0{M zf-fJzPR(_QY^R|(J6g$<{8oP0xq(QDeVvGwMjgU7uaijp{!Z3}V1hq32{Y@AQzT}Z z-~y>@-8*fiK;!g^*;oF|wqLu*U@$ssD)M_<%hYN&OVk4)7a$=}+GQiZGRC>{RmZd9^KS8LaMLGe8)zV4{CMH!h zul~39hnL8`S#5-_G%Tnehw0l`|0)#`?tg-X@`9+X8kj!?D~A@TY_Lh3)*+hpe-{{d zgd^fj_E!!$OB3Y(eg3}-u}BT}u9gyYW5VB_{m&@li-L_#a+F!|AEEr8AN~~6xr%&q z2Ic#;?ElnAU-;d-TL1SMe;jT9HyVFkLH~ap4L*O2)zL!Fnyrf3fh5Q2l!^(Hgr$m| zb36;llc@`<^LA-zbqlRWe;taSIu8|nU-`ZuRrt@nt}$Wb|K09C zn?Sg)Vh;^XpW8E$^rL;g+`Ban8!=yJJxpsBBZ9ywb~TuCb9k|TL?Tp>P~*50exBc7 z=+*`Iw+-I3j&t8h^~yaJ?p7Me_&~Q29E1<$tDj8mUmbH2U5zR?hF5|kvt?kTus)`8 zr7JG7vGJz<2GY23vpQYtcr}e}34BiMv}$FaTF(rCIQt*{58=L1(i;{nwG~pfA5DLH z*R(laI#_0Gxzd|AUfN;M6U&qD=5~GQoGqQw;2Z)`Suya(caXR}mCr{dD;rHJmkTTo-)S3Tg>O3{%|saT)cKGD|pwer=qn2ptR)wbzQ= z51q7c1X{Dkt_Se#a?qRR*E_6(@a^83c7u7PE=S38HoLLi{o=29{`l7)4(oSrp68eL z^tt7L0(0Y*5LPL}6kGn}f;?-Fi$CS&pBL10M7t#m4h28EpQ|w!Z(1Psy<$AiZFcr* zr|{q!v5ioZXT<6IhW5>l+iF}Ec}9}M6f6GT2Y~@^SY(?!D{}&<=84lhf1e*Q1eNaw zq`MkoKl+xVsneNXAq9HT4nP6)T+T;4J;^N_4|+H$g&NE%w~p67BlqMUzy3t{>zh)i z5fJLi5(?yGe-4F z=p<_SV7Z?g&`XhWcQG<-Wpb-z!(uHnVMiqE>T=ZiCId&iYnF4fNRMOjeapZ{!63le zs1X%*JGFKfrvc#3uUYCA^-M|D%k1oT0H@n8TJr6tcmU4oKn5z`AD^HgL?l#F_VntQ ztx*gao93J9sES^Q?pBQX{;{fIQuxPUkmZ*(?J&zk7$i%#lH}+R&+F+ya8@GJ=y7%` zPG28;PCMQMfBBmoZR{mw8a9z`Ar%o#CkFLn6jKB52CkLs2IjZ6z6-nd@lU6`@yH3J zQjW`cX=SiEc290px1*Rm#i{P`yMXV{V8=^)d6F`vQ)cE~rwQdl;O-G561X0QGrJyd z!mVun{rJzTd(+g}fk)+a+p`?mR5)n*;co>}e@@^Z7hY;OK+(1M3qSuOvjMGXivLG9 zn%3_>WHtM;UX|~9Ylz&=B}ZPg7E%@ z3H+|{AMd>Hw1@H!QxATpdJ0z96Oq<`zT>VI7tEi7_UnZg((U3wkbCy)5C83s7f$aV z`Rpe9Z{9Ob>{37NJ^!)%6}9G& zva@X#7_4v9ycoF9q9z@ef$QRe<#A1u=X;`Qit!O!j} zl33kvW5~OSfx^PcEPRIvcF80TQcb&S=Zu!6o$=Dw8J+YEpX#(HR>yOdjR$|KX8#=t zzBf%`Kv~GUn=JlAPV%{8tPa;Y2F`Wm#W4+Doy6lq9o=I$4kOULh&x&VJCyM*bMn9g%-Nj`&J8K|4E7^>$_WQ5f!wZ$3Xm?<~ zRt4rqb4|6-?&pV+>X=?Jz@%Po5>L@JDcN$I%1(UM4Fx0bM%-Ga--%RbR?!+`T*-Yx zrOj!yt7d!{do)j~a#w zJ#P*dF~YLQ30#?1;*XEZgsNw!@$0Y8mr^_P*>0%0nT+Ko><(xLDq4-=@3Bri=f0eS zG422^DpxPsU+0@I1i`K#+`)wG=f3arZTjqoJg|tPEx{C&-9}u0HKOj!8`@gEJmA2& z87n=_G9P;Q%?B-$z0@!(Hl7BWExcmtwZ`@F-*g@sQ)r>OA+<15`X6up^rk5u25wP~ zc&=n7_Sq8pN^b(B@&YH#@af;ah!zQ)TDpa%Og|n}LG(hXvD?;QGja&4pB>c~1wjI% z{VJco4O%ETatHcdRW5#>eSWa7d*c=9tUqs6X0!(3Ku*N*24_5K%hnQp6>2i_K$5cz zPveUJ*iG>dRM5HBKtMs939vRfx7z6fwgpLm9xXvHot&6icivG_!YTZ#)c!>2Rpb1^ zB&Yq4w;(WM$1*MryBQ^YXu(aP+HnLbW8Kv{wS9BU6&eSH{mx9i8yD;O;nt+iEb4z{ z?$5zUjR7a1Np@v<^p9_4CfVH_jNFfVMIw|=!sDXAaax3y*=(909TuI5J$dp5Evm3# 
zK0bFaalBN1XU2U1G37bwR_zUxaqr1iWa#x6;+h_LQX(A6W0fA$zt596=zaOnxsDEIi;N^E5TD91Fxb**zz4wl4a%;jz6;V+SDvBT|<$&}i(xijZtI|6PA{|tk z)Cg9(AP_nTgd$CPFP4CU)BurC1p7SO z@MSqUE>pa=cyeL23(nPsGeEN4Z0EB57ErxC@x|{CZDpsFia@0c+ita~k^gPONn8n6 z8GokY)4}z8W~J;n28m1Ro9o)Ty`OG>P{;s!ohrqH`c2`p<7*&Cnk+dlE>qoG9CUf_ z%~9+~26mYgrcGp+oeoq-p}a#hn}#9dI3xntEiF?H`-7v5Qk_Sna-8{lF#vg#J`E|2($GV(s!`yo`ISSwZ>lnX^j{l~()q#9RMd^S0 z*D-hiGU(BAw&&U(_4fmh-<-aV$_lHqBKohBZp40gKqx<@$ClZXogDWrJ z2dn?_3VY8uassK7S$q6nO$9rmNAv&rW8vzE=-z)1K`tUd?x}Dt$x7OPlhZc-GCK(> z`=~_c6?icr+_P%p-pHsQ`>0C7ciiqxt5D2utC@WeEF$gn9d)9+z<=g!ti&cfO)D5! zr*PyLoMV@7*Lay(_@@x}XFV(iiy{Ow0Bj=>SRL1skn!N<-d7qBVS4VYiTxk5eIY z?nEqUS&+D5?;jhyKZP|G@Lly=s*eA9^*E`(B*)rs3hlqj?`J#J7u>AU!d)%ne|;hd zT+TO@E;^yVZNop?Xbx6!KDVLg_l^4bM79OkG0ugHy3GIag1>wn3JNxVdxo#wnAf%lY}gu^i{lPJ>bvLE(GW&p($;N28I@U62Ti{QfJM=jS$1$ogp3 z;guU(9~d+W4K56%eF@Sn2t6?~T<5*j<_t-Z+FjB{ZD_Pn6Br0fn}b=3biO7lafyHb zcG!)0z;(5|Lj8RU{aeTqJ$B)k;ca>j$U^eQt?J!p#)yiB6?_jU>(MfpaSb*HTydUv zKI$1eBWBJ>$eAs=qEqA$m;fIT1|L};_%DC0_VipH!mA5`Q z_2KP|xRSd<*=RSU*(+hxy>ZtMEv9=oUxy;clVmU8grE@!N%QUh9}w z^gh%6M=BDeA!b|s9e&-TUxVfZ-8y2T<kv%LcMJbz5b3&wmcZ-k#}^0X%iip!mD$ z^)Em3Vzepg?1Iu!hSgm`&*cf9g#2<8>(SdP>md=${8qYZ%2X+>ybZ$hMa^a~#HM&= zS@lZ6_6my5V+b!(J^Ef1@$7=5Nqb43kv$AS@Y17&G)szVsvV?4ehl6?bP_a{BL+{f zghl9TS9+LoMR-bVTF5E7rHIub?C#|h@fGY;OlwdvGoh%NE@k$NTaeo=-km8c4w91* z1r56~rJq?7(DUjdLlRRB(`j7Ge%e+|fqXvST0L9J*tuq?ai%V9)yrc(DN?>JqWnC@ z=S}_KW&PvH-V7QLQT&%6ZYl=EBU3jVIY&!~W+`a!`R@L9qCzsqHl`a|Mo<@iRfNvof=Dq3Er9id((haW$a-O2gz0br@fohCW$5F z=dZrKX!62^f6Fqz4B0*v&nXaaj^YzXXk;KqOIkg(`2*dX4gQ58kK`L^BAroo_8=H=i=ts}czQ>BQo z@oqLvQI4dC@A+*P2CL9VW+ZS#ikip-XT?!rAY6| zZcSrAdf$D(FQVgDw!)Rsc+t98mcbVWg&unvJl&u%BL z*RuH%yrs=%s%a2eS!XpRgy~#XpRahaMBGk^CaaY@J*Azp<={lv-JVaX-*`$;Iyy(F z7pyTzx0?9m7R;zHWE11q+~3yD8+HuiMVy_I z-^^>L>^$81Vl5TnRKIEN`B7u_12M3%Qns2rUY_p)UY))M8`Ubl%>e1Ja;U>q?%;b1 z{RHG6%@>HXO?W|863o>Zwnk+)2UjtO7XM_j8}ilYn1tmDhTm)w88p$AZk@9Sg>O*! z8`Q7_Fh_V%r86;+Ge|K(8gPBa7?%`+*46_ct~fekambD-ddN=Kj^v$BVwgD>UbO_r zdQUOHR9i0EC|amaUtSaFkVx_aQ4FL7%{^4LGTT%-K9h<>z@o zVG@^IZ1g3XzDm*?yB;l`7GC*%8TfVaw~3M@^o0$(Mq+qA2>hzeCgi(qrbLsUeP4qYpBC$SLf*#_xY?^q~xg21JKJn91 z-`5E|RMV)j#4A6nuv|IDzHcvVOPC~=CdNHEbI(6LV>@r*Re0!B-i3EBPf<#Ht-rbO zbP34q28mwXJj@wHIDSUIfh+TKe0-SXB=aWBM@^QBL&21rPrlPiUW-G90(p{lXCa=Q zhr>SDVkkn1KY*1)4Ymq0Fwr+J+ON*)i}g&bhtMEamLhe>)gW7lk!3+g*mxFxE(!1{=X z^QJ+Mkb=U#RkUwj$e@AI!0G%(?z>e1%ph|-N~9Gz;(?XiJnTt&i0S;o=ln*AzqQH_(Ne5uQ> zC8e4@pg1Ze#yW9lehU(^Q9-1@tA!)(b|eqQkHxfX@A|H@lkj185mRj@R`Jp8i ze`w60fWr1i$rMh7-X2ZrS>lzr6~gNE^HP!7yGc?s#-H`1YWN_zE1 z^+L7%a#d}n%F?o=*-mpc59~nWrwC>j#&2cF%t;?(ha%l%CocCf7$Ho2Au+sgI{fwQ zUv0Af280Pj%BD$GY})TeBM0%ACd`UMA-S!_p-La& z^6_~%H&qRU4ZnqK_|z2T!&kq7^~n|#G753$gcVteeChzb3N2L1mH3B@GGgy)?7IT?aVjU@;zEkAUcCd$ zk88$r`h}1o5sIG_j4E!nSTNstZ>>^;p_V43tfn)h#4DPcc=qCKy^6AZKpT;pr&9kvcdJSSf9Y9$m9oQql)?``JA#H1B!jzHBBbt(N^N?} z`-O1y(Z}7J^4%Q1nWDsxV#yQ@cD3Yp)F(Z_^vQ)7_y*@njfnev3UZ%J>ZNhHpQBFA`>LwfqD7A~^t5+j~yMeMPE1gRA&eHVVDzR;RbP)6M`pJ%G zx6M=%ob^P;?(WuZ!mii<<~Hy@e*FBBmaLlm_fzbD?%azpeP@+T z`U`sS9=;4?JhEj}6*BC*HF1UBclDb@Xy5y%*n$&U^-GbVa8-rp+(wnF>=9(ka*r;d zPBU`>HB{uf+#JqFeSg<;DvJ6|xa~)akBU7F_v83U(4Uy4W9fYs{UxqtQM$%SIU0#! 
zK8`=ALSGxO88;r;vGdwQf9}VYq~Na=>9sd`-ud+I=)8XYcx>0~R0NN9mWFc5&V+UM zQgezBDcEXdlSt3WG~M5qqC}GFLsY0H02)b~8^4h4n;w7l?Sd`bnvz z8;_yRlT+Hhr>mI4PCq|(wkgY?b!wbJnuX45`Y57mYL^-Lt>1#SLd$Dm(|Tp{bIOJe z=B4v*{*}}KlB7QRvd?43W3t%4|Lq1HiTnY^fCEbp8VI9*+qN{Myb1HtO{L<0aH)^R zb2*0%;27>Y4doEmzz$H$M3>&Jd!rmgwA3t-BdKH$2F=Y>VfT~K^CRv9rW^qx6_mkM ziZ|G)JV2JRcLI+yK-adNqwg>nE2V7H+Lgeb1I&0D2a+ z&Ifi0;}DYTQi*0>7`W^7-$y4RBq%QDN(aFFhHVtp=Beyg)DqU*@!G16e6z`_R%T89 zH`DuC(}QEmn%bwkEFUoRoLb;XSIXMgmHQ4WJtBvOWY5=Sqewz`by16_?#A(25N6}S zoJfIq7?s-Pm>c(4mRQ6W2((Lx54j>jF|CBn&GO2pSAIXP?!33&6uA%kzxm-R&l3Zj zoP6Wh-DI!e!NTgAnzQ@r=!do0Nl#i`{#@k^^WN`o_>w@+*?0YQK}y2ZW4^anze+f- z3tY)eW9O&ewbU!KegNhYoRa1+gMPXU*t@L3@B6S_F}WBtGR z0f4(ui0B5My+8l2-Yyi85YlFIPVHaa!12IgmXE7k`ym;BgMQ$R9RVSv`)=v+J^Ij} zeG~ya_w{&8u7rCGYPuc-FN`ezybf6s-;5vY2;Ukl)Ww2=QJE2gvT8k+st zzRL?pevnn6JR3g52vYGy>Xx?b0SeX31QFqijgv&u1_Bi;5VjdW+_R`9SpNHeASJ5Z*$D3lA z8sD}$)nq+q@VY!Cva}F{oOsJD6$x@7<9MqU0|c$t=7ON+7?SP6s=H##vEy1(?QV}> zrAw#y&E)9eW}k;iaGDc{EDq2QJ*!*S#L4q|pCKFF)@$bOT2B{*xgP=2+QzqkZ-2D?N_^{F1E_py1r#s|E0ZL$24L~cN}_j@|>cHQxIy@8NcX7O@* z5vQkz_+~@tU%B%Ji0qR?(05N>6wE&F$uEdiOm3$4tn?``5)mG9Sh*+|CuDy&{^3-T zXOEJ0x?a4!OIdlY5NHbFXdn8x_Mck`?rG;PY2H%z?v`5U_ob3~MY3cemFWsd3j;YW z{B4MM`;$roP7T+$zBX&N52%~zupGr+gcv8EvERwBD7Y7DVVD1dN@91T=>o_SO7GX_M0r_B!41=2G=>cf zqx0*&~h zGu1qWh9Fo;+sP0bQS(9TMd0rgcd8M}7jQ5@J(zXW>KKc^P*^97TR@1z#rO)4<>!z_-t#^w1|R)Yx|+5)GC6~{o|>0>BibsI zW~tVJN7&D6r%N&BN-o3^KaU>b+Oh=10q(;2;tv|)prPdPy^cS44*g59BK2v8`)V=i3m6A0; zUdW)AO zRxth2ZOrgO!ZUJ)WQa#HPiC9 z14qTsan)%LhoHxY6yQrM zyZ80@)3P*!8dW&*MQ_~-(#+6Gy_4Wr(dt;azEDF;YGxtVIgLZ|na00oT%eJCv?Tv1 zJ4%7_V9Hhc;NepYQWiPYa&J@b-^g`!#MH2?2aU?BozhpKI2|O2j@AbsC+;!LHiN4Y ztr2Op3PVjr%s>i-oQW9?`SHSvv+0de_deZ5GgGLUc=UML<~o)uTkmWFw-XO0_fu^j z>iPrm5ib`^qpIIzkhO3?cu8=VO+*Y%{PfiNSML<2mnDV{law9f*shD+Jh_px|bH;mvjJzSOZh` zO%)0U_QR%5b(`jxIOSIlUfvn7c=GuO+`PNlyi*hOJ}pHQ)Sw^1bK$`R`+`dDHXa3K zhhog#O)F=P>F$d26e|(wvjM#Poa1W=&KiZ(kf#jk*N&WnM3`@>?snHPPoFvX+MwZ5 z{gep*a$1x~{WuF=9R4XX!KqRz+%-@hnZ7__(qH?qia=y?sq$4{o-aafgDhJMNoY5j zzpXR!@Ut1HK5Ez!yAC*IX-`ieaPp}bm<6SJ$k%?~!a76@&>VcN<1kM=Wx5^HJ;B|+ zgq2Uwp|!7EM}y0~vEERchqjnHdf(}H78CDtF{#&2bu949qrMmlAV6w{UIVQyV6^c# zxsaODdrw4I+-d(H`nKGS;z>}10)N0%91}Zr4y@z)V-@cHz+19|%nA(ZmH6`7uyzzY zJ5=(-$b$T6fnWU1kT*e$MtK!0i-%5Shy*{94G6fgGD`Y#nf}i(bC~MLd*NI3QoW71WA20oT!w zu;@`KWibnAcGU?z#K-T5>bKs_m#~{GH+E7XB8Qow8YlG_6Q#4#MqY)n)!e8v+S+a) zg+_B0BvMv$WY17vVC8XiA$4Xte~u%-mS0{!gf{Rd2dq(5HRbu18@r=32#k5IlqWc= zz^pCrQgjA#be|L-`v7wQ5jxxLajc(lvKQCMVE~Z^q5NBy6sRP1)V9m=%H;>3`U)=o z&{$!GsXtMZGvePCBx9A4t$AK9URMHvLplbHv?<-s zqvWn$s`Swb0>7hkd)#M-2AIIH} zZ_yW{;%m12sES_&4W%ZaxVm7sTY8t^s=H(ka^9-4P%Tk@lr^B>cFcbKNNIt?(lL9c zNx>)&1}GIZDf?LTD2|t=t=pNj2Q3h`pH5vcp-qP}EuO&}s_^*It8A){fTyl1q-KL6 z21%;6w!v5Rjj)&cuFzC1RCCmC7aFeD!gXADl(~!)LRf>g>3lZ6!Pz6XyM5tC^0;je zU7YUti2s~HNPVDtbO3KC9n%e}4ugrn2-77F7@3$tN=)WgF1qzSPVv5xm8%G1fVj$< zJu;Yq70Hs8NOz*B+pOA0)wPF)^cioy8_Ev)e zPU#+D!C5}36>=nqQehi1I?7st1Htdl#M<5eVUaez0pvZyvf+ijye++|#;prIJe~CR z)^b4{sVW?O6YO=XymZ0XErjuq&p5fQ1;69hHMr6F3skcWAI*muR*KGc(_|4u!(#&} z4)IIsgUs9+>aNTI7<3J%m{L0VtOQDWJZ)GVX0XkyWlwTkJ^F039=a|*_d{`5#hPcf zdEa{`1Bx8|6N>b3qbu8U6_9#X1zNbEB|pAD89TsMs2!=vJ=NS2MZuiN%bxkc*#6T? 
z%Go@*%a&kuCat#^taseXaJ`BBZVj(mk4r=cJn4FNo*$tTzoRuomY3Ns`fEa3u0|@R zf;@L&ci{(Rqx7{*h&u-!=3{$2{d^k`VuQ6d_|{i!)O_{khEC*62PlFEM(z*fW73;_ zxeINJZx*F`ETD`)EEuu!y3&;#^MY)d1!8y z_AIfU{Mh3eu|;LijhpUb4zILJoQJb$H@Wd7BcBIh7A>BH{mN45<#@ZAf$cTaEc@Xy zH{v*r2!N}im7MEvcRl~ceGPY$Vw#6^%I0_%g(>58j2=mZ!LN<`Lb;Xbx0fRQP{wY- zbB7B?v-mS;nMNL|yWfoq==jYFq zZ__RMgZ?2|9(6+AF=Ci~n|se1&C~MtAV8-$>+%zR1ZtwGUSN`$I=j=Y!!5q8)}yH# z5LxP~`X&$VKOPvm*Vc>49uMna*4Nnh$bqS{PX3%O_3Y$@WWp9zqP*+$sRlAB#^JVr!co4ha4)NHYL|LX!j6Wvl*dUtKr zO81kc3L$YjbdV~q%5%uZr&=fVyv-$44~MADS-wuoOCNMW^-f-ID-qO;S-zu-%e>9! zmBq;mI0b4R4)(+aE`>rlM()0YM&6&qJiW^Qp3OttB8PVSTKcBq6xQ1iKKZX!R=;#; z+t$Tt)o(ZzIOcPt`+1HBi8ZZS@LLU^=#Wg9rD4vpC~X(V*1kgW2JPfya>*o~%0}HE z3}omy*yaysFNrAd=nty&5wc#5~VQsyG z*2EM)yc@ouM1R};{pYNVsbKeyWU&#m1;&H9s#~pC2oyU|>F{f^=v%wPA^#!qP-={3 z%T4CZ%{qol`a4U=9S_Y)<|JGN0%}w$u5erK9vTN?Wz^;TVt2>YMOtN{=kagvy?Z}4 zXpeaY;%z5_pxv`!MtqK@+;a4M{+3TZ^y-_$D0W5By7bFaQz1emUr|x>C0N>(#JcAx zh9|?{qcta?gD{^q-)7YrCG~ufv+Kdta*S*W3UZw+YkU&xb4U2%#-Hxw7S<-V2FkQ7 z0;9Q;1r(Kk3=i-2wUAym-|9HAqB?)xjp7J}YoJW;VQWIVzageNb4H)j+i)bmNOL*YZe>WTWWF1{|TOHjn6DG*5$wI8g2b;z$GLg82? zAH_v>KWQ;)wBp+(oSugEdP^(c)By|#4!DfEe)(EdRU1pS_-wY|GcK5;d(6WTSnr;u zNgGLY#u61EnIsxLp={9d71UH79g zm=`I_U9}W5LXXw}h=wX0n6S-`A=Qa#=?OUW_(-8{$f&4Tq2da3gkT;d_OVFcEG&ZW zL_E*m%edKwyYYHzhtBc!8b?|@c|Xczb8fbmJ9jQ7s;pMR@qXbsTNf9c!{?RGo-ztG zn%X|2e!k>%D_CqM@74o5e#yb*+5~%J`xYA$C|Tu4FHVZa1bsD8q=}uaIedkFJ`Rw_ z?CkE`a6yZBNT7rt&X)xXg$%2eSPS6^%wx&+)alR0Xu^GX$oqsmG@Aja>r6fj;-Igx z6p{dAd2R47w;${#F=F1`(HS*$B4g0s3!p&UWz^K&alnXmcxtpy944{{``)+w_>2+P zEvbOMpwIhh0{^)Kds5Fv0k{Nd&OiT-CVv%#KWzN`bJHl)iHUuVRo-#`iV-h?A=eK$ z+qJn&WTnA5w|l=rR4|H+JqYa^aj{D8Iq2W2kXm%%ni+iWL=U#A??beo9=GT-AQS?- z^t-$ApXYi&4y|gNG|7Q~e)Wq7;()&jF)L6-OZ4Y={`P}R8!@GHhxx$2g-T9mM}aZ^ zHwmzA|L?4XYrf~f1K)S)me4U*?64$mz9!yjq5Q|j`JZGuO~~A%*Mf7|OUy7)uY1i) z{4+>=T8eB84v3$55c+#??AMA*TcoU-enr_mX~0RoRQB+G5FwZlv-5&{KLuANRoP#I z15FrH%<2)Boc;a7^1l>@|1kLu z`2TM?{vTeByu3zp4QYWP7wiX2T5zF3e_DLqE|Cd&7VF6lO4?;5w2hPg5C%UrPO2Vh zb)U-hvRB_lhfYGH=ahLOXx!@Xsy7~_T)H;yV(phr>`$)E^$P$}JqO&w7)CDq_40@% zwr*V+6lcL2xg}^qCS5RJEu~|c!W$%gH>IUV=E^9(VnC9@)^;33wP6R{K|4{iUaz>k z3-LtasQ?!49cywJkl0M;uir8Klx=FZ>#mA)YZ#md-^-b(o6ok3`*03x9*hFi>lUTnZLk zwwqmN z{nnj=tZH`QXiEyna6wM=3S?^XWwwfaXJBjFlE?bSkeZc$wP-EvIBOhdL|sC_|-m zN-)KR*#dD+$J!-UPe@_i8i)A`kwsUxvpi8tSZ?J5Im_9t+U;-efCP*MjZ+dgs3$hx ztv9@Cm&B&|+})lZNig$vw4?Vwdvw=pE2YIh4kXVr(fMsq*oOMymj!J~?zWj|86nJD z-?$+3b=GW1%|Hptp?e?PFk#|Og^*9bRnP+kfGj{Z{Lfk~x8S5^zPRX8yt{FXv^chO z{=tcO{&c5Hs%nIDwktg4pGJzzzGNtCqUZZ|vLS7$?;axxR2AyvXpp8J>Q&jsPE|Rz zb>kZ1&~?scLCM6dQvUqH1L#&)Mzb^uR5r3tjO>~$xsG}F_GLLwF43o zG7EQ+&>8EZU!A;Qk+VU9zN)7w4nWgVQRaO9U8?6P_&;X=oP%{fUksNg&#SUNBd*Yk zMNXR!$3uzS+f9qEO`-@#BZ!o1!TV#T=$lMz$@d?~k4oyEMfA zZP>N9S0oeAFa?b@NcmX@Jz3A%jpW2pQ~He#jEOq&%esQJJIx?u}#i(bPoK9~M3EKMPJ9`N58{VX9* zQ_L;3cWVjege%yBq9$4lG54c8XJ-fub!4}TL1g60Ygzg~ieLH15aF_%GX1`o20`t< zH{wmStZa~-CzQ-Lncz5}r__kmXovao??-w7=wDl;pf=T>q;31A;FYwWk7#QSVP)KP zL6o3`tzFRwYLo-fj8A-iOC_#r{Sdw><3y^d`ATzp#O74eD^P~PDZbRK&I<-AUBbvh z1T~8MTKStV8n$1+g$aTRZA_`l!LFZKsDGBySz{TAen7 zApg8#^503G%0mN@R94VE%cS(IKPXR1#>&l0at43&l(P2j(AMFp0cBO_n@PZ()EcSx zMSy0Kl|5Bg?*UbWj$o=7e4xG6glvcebWM|!opPtst4hCt@B*mr0h>yJ6jX6@x%vD- z@6=mg7;;Hslf>(It%Qoi#37acI5=;prmNpLPy6i&G?{_p65DeEJk_#*VS20e$+2kwJjyFm9hx3PRVo(K$8g7HmOFhhs^w; zQ^ZP%r9NHP?JoWl%)kBcmJ@g`^kC)Qe|&b|_dsU^^wM>Y!lr+HBA1wdrfXM2{o8l{ z1f`v?!65%1nMQR%OG4F;H@U%f+2Jc0^60kDLs|Ob~FhL9hm-WQ%z=ZXrx_RD3p_eNLdN)S z|J-?bz#sZoK*sqWANc;M-~V~kc%tL~^d*chWl2cQu8>7q?2Y>S=ZP}`za!z3IP~u` zVSFTWz(?xNN1(j| zZ}g0ieP$&M`RM8bQX0OShq{^X<hB@AwFI 
z>+5uiQQ7R*h#S3jJulAJighJcn*V-0!k){tc#4NL>sB54ac%Cz$>emyU`kzST4{GU z43`F+|HiSK*VkU;Cs3ctdAM{YlF!`FuzX_sMQ@G?ENJd)z3CcugkfnTu(ZSi)*{>( z*&uVUzs|eDV>Z}tSGh$$n{lArDcaY2-D|u5kE`PUK;BVQdRQ4G6CVfwPx41Ywc%*ajr%$+4Z>DFZ{RY%?Bi8e0K zFerjkH@AO1tEYhHaM+RCyCx08ygG$PA54B>nb(}Xv*IxL_Fc$TL+*uUQCXlih={Qi1Vhn^lQEyKaozG5a zOAy}DTq)Nqa0|M48LcR0mbK88`X#M1Rjn%xo{?FVsd3YNy7O!8)#E>x%5AM`|!NZepaDX*wKnm@4JlmIFS(_m#khF6nFD#g0NGR z;fN2*_7{GrtB$g|wmIGHVq^$+j&xs0&W zhW2Df0++WBv!00`xnOUet}3VAn@i%L&#|8fb979sH`4oN)_b<3J0;}~LwfEDI|F_5 z1EM8$?uZ-|RlM*rTb&;+o^9a4iF2IXT>3k_tx)cWhF#RR4%RZweEXD8LxK8;Gb}19 zricRc7rb5lrpwf`((t~l0?*xYr+y2box^1gLki0ipP5Un+ZXfdiW@eKTJ-bSL*oU7 zs5a3nn$oxwS{vgOHs1yN^I4I11(Bewjw9WwryIw#X8_%}vGr zV$YHZp%gYpYg0?5ft_Y7b<2z{*AVQX{I56rs{diK+Qi1tj(ldK^0>q2-wkX1l=}LE zoML{Z&S$k|*gDuHbF`(3w%w9w$j{ph z)$jPkL-j>USMV83@)H&4*6V9EBGpoDpn~!Qwqf@{joQns^2GkS=j&!7uEz_u$S-`( z=UgN3wW?9i%Xl0;j#{(!l_F(p&FlxgD(}B^INZvvb82B z)y>gsJK`T-VK;v=PNw~{yRjmV?I^r#0QBs=PQtgg1&y7hK3V6-wc?`|zxbM5uIkza zK|5}%mKY&KJ-S5pe8q@ZtXGJyZ)wu9`Z|=uxLwR)ApXi?TJlWVK}Hd}crN1st-|^4 znc~l6N@nZLIV`atoWIYhMZ*N*fxHqt6S_pb@wI zg~5&O!Un7N8|xX`&;obKm$18+g2UoFOJ}i3?jL2R3ar15e)xFmRc2Oq&YK-&q>1P) zh5J7Ij&JBNT0^e29&>~6Bll=y8$X=7>#8JmRxk(o_O)c2n;bQdo*K$++D0M8XZGBb z7+w|1F+T5dLF~*mL?rw*vyrQL1e5%{Qj)whAX}``4KvKzrY#v#(G&2~NxYKEtyn_~4yS?&g3M+!o zd}5xTMhZHwaT_|Ia^D_{alP!A%Y45)q0HDd<4e_)`K`x?P`qMi(4FtDuO-?*a2M_s zkyp>5%Cx28HX>+PO0bn;B#q#g0C*ap_a z*lw2&T37A#!c0vO=8EH9>w`7s5b+?Dy7l;uLEEepCq?%%r-9@~Rc>-cvtKkEW5RKO zu`cod6Vuz!{q^>@^Y#SQmT)%a#eZjMS{`` zHCoiqxlerh_7cBP$z{2k4Ru(QA0aFw!rkR$cmz;#S^S{o0l#+n5o*^N-n!&Wx`B$; zJBW8!O_Z%~P~5V8dmhd1ko$I&nA_ZDu6}{qeQwWr=tQIH3RS*Cy-=?HXX{sNo>LnY z31tqp>$7ADo_7Ubu;!1{j5oD3zPk|%^&B_EsqL`yC)&Hi$+ zQPs#X)hdcdfA~m!Y0^&m6V_TnfysRwnfRM)hiJWO>yJILiC^l4x^r?3_1S(H7I1S{ z#>(cq={!54>Kb3z6WM2c=mBbYeE8u|!j<`4OiHjuZ|1vatZnhFQOakp2X#G)8PQg( zu(^@_B(87P0C8J@XRwC#YlgNvJl^$=>V1L3iNhU7vGtCbQUiB_JIn9SzZ&*F>8wc? zouNIAv9|qUpFUse?8ToHr1{uuM&;VCa#n`Xc9`_y58%xB?Xa-(@Wf%$)w?F0*B>)( zu`x#;Ipq2=SUTUZtfjt};$Xbs)z@?gTC($3?_^iv4CZ7<%*9ZX>8FN8n1zKm*5ZZk zeqleG)NT8i#`If&?CxK*6J1^#ig%*8lX^C#J^IujeJ>r8gjHUJU0h-#KaTlyR$cc? z)a`E^K0_zzaIclKoK9%UqEg?)MXJ0ya^p>WnT;#z3SPTEPc&USz_z4T2NIfWU=66G7r|^364>VYu^XLeh;Jt4o zd=UZXY3|S3x3Qm9r@eF!Q+MC_^QJFvDZ_psTgs|U%S*>p2LR;@O`XHzGOoIoNYChrFEk=vEpj>x*w(V9MHF4LKs z-r|iQVH6XJ%Y+3}d&@qd#QAgwZg!=^pe3`Z>g%4GJqW+vzj#3LZjo^^!!*@Dtp12^2hv)a4%bb+Kid?+D;89Vs^HE;BN!aI_w&mRK<8L^ilz>SfkUmRRe*2g=*aro1fo*nV95 zEL^KwEV(QH`P9UWmsSK*ceXO(B~xy1_U#llD4W27=tMy6?{<14+2!8z;Rp9_F&S)mU*#c)+{&Cp(Ex*c$@SrU!~V>xp@nB(Uo`r42yt8ki~E`KFumhduw z^{sCGwZr{oE?Kq0Gb{<311gXC9mQRvT6A;c8@1>QLl(WimUverEr3mK@5bdVXjS-K z@3#+-5HnIbBrW#9b|EkBTh?K(Li@StY-%3sP8c+>pawMaQ|5b%K2;0T9nDl%p2ban zN(2^hOVEg4iDPf6ysGz%y8!3RK$XiJ(>kylUJB2nj1j1;WX|flnmJm^?>KzTGJVBZ z5DTLD+oM;16}*jsY>h9VXA9e^EQuH1-+^^h0aA&{?5>ui<4>c*#{(aMJ#|#!Jy%o? 
zXrTQyf&q2kY-PgeK`~AAnCA0SbdkWE3k*~1V)Q>^x{CK6Z}LjN!3q(4cSOoHD6bkf zn;=c@&|G8Yvi$Lw+5PRcg|dNxa_5ybOsQ3k2evo~Gu)nj!EJ`+uocr`v$go*rn=2) zhoNG40q!+3-CU_9>kR?#CXvi9L&7gsL2}R6tw|yKIGf8#k9oz4@61GLUd!ZLF(266 zT7T+XBOyC5`uyog|7fsAMCQR)z)Daqn-R+UW_MQZzVhSC43G_fB4j%bj58to-eOs5 zNoSO`wu$iOW;4T_ugxu$0r>%dHR+^$K23^>)g(d*0B=?xkrYDl6-}Q(UqkZjx^2A& z#$IfsgPq%iK~m9G-PyE*X^D5|qczzyBimTn&#HFk^p!fU8ajNf43yM;tjFt!Q-5B5 zZ@1zK0GHyDra9eK4g)hU7)|YM7V-?&q6`*9`_^3D#?JjFi4LCobhF%OC?jxqY0q!K zD1Iz9%zS=F(jqgQQ?EMF!Jv-2V4(oC%`8TQ7<{dA(U0j>ReVm#qi5nCi+Sxz`9u|U zy8v-3Cj~5JQYR{I$fHrFCZ+{wmkMT8*A$HrbRX{CRE<7QmTTOV^6QZCXQ2R`0)dKG5azIUa~_P^WH!Y7SqH&-V}Tq>-u8k?bTk7|xZnyKIa(!gx1Qv4vZ zVB3MQ3G&6!o}Lb$#?0)FZK)RS+@j<*xS?_$3oxbpUstQdrrHwpRT5+(%9TsSKsDtr zYGpAUlcM)VABhX~>L;b;-gRt{S3jNF=MvH8Y}U)-Phj=B^Xb)>SDI(2wm5e%HLhscor>QU9mp z>OKuLnBb$UKfEpO=Q_Jcx7Z|X8|4*?GGbOGDx6eHeT>4w3IbyUPmc%i#!vO6BSLs~ z(NmOJQ5^d1VLlsfo~VO@@(@sA_9cmW_v55Lrd3GeTEK~{d})|N38!@QxM@`+dA8b--8>c z*+@p>+{9dYUMXSL%^_v!_uON1-YNazD0(C#apLsDz^T61PCteWY;-S4ZlXqHqMl*} za#esYi^HxfS3f_*E9Aepb!8-gGdD1+FO~Pl$8#QO9Hg?HkG`7E+xt5N276Sj2Er=E zA;~9xLq-4KBj6x*C;G;QjH0&xc+Y=sia$4asOczM%ziC^NUK$o-+YR{An@V@-~ylP zOB+1(>!SbA_j}X769*pRPM>=J=R*B_?&T0ES?!$)DzxAJ>i;vVpFc6S5+96VqgrKN&LJtrKD4mdm5{M)uIh%2udB2%+y}!<{bG?7OVth3Hiss{_nSf)zG4z(C_)+ zzYW`UZ$lND>5=TuDY^7o4%6hH=l)Hh==!F@ubcZ_PmZ4Q`q{6ce2xRt=;z3^{ABkX z_uGf;dG~NI^6JNVR{d*Y-S-|m;U}?&Ey?tcPkSjF3Mk)qY8(lzVLe?j+}ZR3dI(W) z^86o(^dEOc;Y|C5hjdGN=LHGerRDBi&Y0I}e|~KAWf0s~K1+LZ@sByo+$*7J8TzW$ zKI3Cd?y-Gle^3A3!|&)pmGldF^qY!*Jtp5(!0k9Ey5HcBb$@GZpT|2(hw+aBM3z*) z2qed!k4($WLzljo|A;EIx;<*r}j!++n|^O212 z_K8NahrR#%C;e--r6mgtT=uG5y zD<^gVGB#PXII7hkih2sr@(BXwjh(>V$cD1z(q=JzGvS2Wro>lt$Lb*HU3i5<(x-9` zKZURaJp&WIp;U@ncD6Uv3$)CgYe9Fa_Js^>P3)P-e@56Rs^lUohnG2DZC}&+Gq1!j zMxKk|CNHc$;QKSJ#5kqeY_iMJtcpxZf&LtD72Kd}sI0MJVkS#3@LviNwo12GZp?o} zHb38$XlKseW(s`*;Y0>8v5UnTDMwW#N;%6UUQZU6N>8jqy*GzNhC{K^u+pjHJt?BM z@ZQr;{t-`%SpxY~WeB@x*dPCJ=zL#Jn?&U_ECVhJ`eVw!7ztPz^__boXxkY`2zi?| zXKVX_O}`(4&7crMuFUFsFQ+a8U>WG%%};np31d?IZNu;S-+Yvm~T`i-e{og^Wfy=ixN{uwM)rpV3G8Pg~6 zv45r_=X)8+W_~&iKleCs^0Vi8PJL9jj{58ss81Aq-ae&Vaz$s&xK)Mpl z2S;_gtlT@har@Xlzrha=tdLi<)58OU8O)Ne{o^Jc{2cKc=#HwXclYG(PY6A2EGM~1 zRh8@!T!dzY2l6{O^3W@Y&Q5KC*?DU5R|`z$s(M$P_o~LvM#+x6(K3%8D_ZIIcF#6n z@I&#MNN3{+veT!ML>wNre*=(h%5Rp<(a^T_{K7-AF@J}!KVRMI-=!HBwSOZ;?Z$V} zb9>E^x758J_-GazmB(H+ct4HRh16&fR)kIT37Iq(e<+-vgY#VeLG$Zv) zR9$631Mt;{ZRcQJ%|3vcz6MjTfU6E|2pBxk8R{I#1}hq2MBq1EJHS*7a{l1 zAJqOcoohc6dd{YJSd(VioaAxn@K3(~{E>YcZDj)K(9hw|uKe+=r+FVKFYqRDD>q;4 z(JVmZ!)V^H2`G~4BYh0_)@`uSK}ET`%r{%l%^`po6K1_KI?GpU;lHWrFjQC}y%oP? 
z-TaMwQiy=&x|DdxPeG0Cn*Cs7QaBgABB`@Dxku>1rvn3@_wkvnwxy9UFt;v-;}clS zxDoDi$kJTDZk;PNdh?FR2M-EPyaV*=YT7B|MAx~+nYuPxaE5$mJ$>ow3I{3>K9f<` zZkM5X6XnJu7!}YX$=qDww|23?@BUPDAk2C6zHz;KXkv|h|3lz)RIawkCVFQ%ALZ{t zyy8Ef#H%Q|=0;yIA1KmGJfj49sm8Fo)f>+Drn%tfEC&`o1zU`LV;4japGeVngo{R$ zQDOZ~4Ubeidqw{n_hi- zVDtE)Y(K_xlkW30D$P#m>8YxPpan*%8EA(~2#tPh8SN7svZGMMEXI+_uWcFVC2QW9 zyfkLPktXs)o#DLT?^XQeY$rXaz}TkqGK=LUuyqhK1S)tt~TC?-m5U<4{ z8fnQ=!_UaGTt~=-?p+@mM4}z)C+u^p%h+-C7Bh(BJA#)Mv;AvaL|XDf1v%tT^G!Zl znDj_y)jWfVEJ)QgEo<^n$ko$b8T>6p9T*mRr}$iY2@_`$QB;#hOPIJLIG7N zZuTk$nb5Gv?<1Dm8%WuegD(V|cP1nYP}^`gj_4fDvC(+#9lg}|edG^6I@Nb21DG3+ zqX|=eMTN~f-#v{Rtt&$6%zUEe_p1Ela-ZHQSc2$?n1lFDOV4=sdb&a3RDA^*u{C~P&1Iq+Xc@}T$Cj3ImDfR45sl&`Z$x8OHr!VBk1>aRibCQ z1Pi+E9LKmR589u2IGUF@Iygyn>iY&fcw<2GY@yk7(JPNi4;|A;5*o8WbG6)S^m=8K zq1_EtP#^BTa)RkneakBhX;52QVCHMYT0X1fQf}OA=_7*jwc88_*rHSDI4NLGp6?rJ zE3=TAeiC_T%73x%L4VApWG}UvPtW6&*vyCM4+XLB4fvZ}=t_M)!5clOk}$d%2ONBL z8nd7-8HbvyHQxyZayzZHy+TWC>_XP>s$I|+SgKZB!8}*+EsJ9ukXfAFzqq#UQ}-hh zyJPdreon&PgG9uYh7ExBTcU-(rer?#;dFX$rIcj90NbxI5C<%oPgdo(OAM6w9|ZFLf#Wu5SRfZnx5m}B$DWazP^eY}-25eD-4x|wZ%mGJR9=jr}pIr2u z7pM+>(W#7j-%q9G>P0U}2tYw3bLQH+YB9}HlbRmLO)fMX?speDpz|llc0N;X(6*;3 zRp3S@qR*H6jMrga4n~ir*fORW!VYTsvARHJ)|+Y=-+M0+C1g5$R+c1lpFXik2!w07 za(w*W`^aVy+a(x9L@=?AeP=E?-f^NeMY!AKs=!i!kihxrt>*~GKE;HM;aGL~t=%lI7Yv0*9x| zH11_(R;VOXb-h~iXeZGDZ?v)#q2gQ=g||D z2|9CX@i@MvwPLe|*S-zF;}8?b>AirtqW9jP_e1DEAS=232XB18%y5El@1J|^!D{>d z&aw;Nd!e-FiIe%WNXr&9-fHAhzs`B%7$4mDoVl2CWbTyH;_U^2rGu|QpKlagbM5`0 z()LqEKB0FU6w>8#`B-je%}AAPJ+x7y7b{`)^;tw9x?hYlU{Po3;mG|*@-J6|T-oYZ zsq#}s#fGh25IelHo2j^$NY~d0_9h)^kPWB_->u#tAntS?TvtNd_h(zqmUBqv?n+s+ zZYxUAcaAjaS&DpoMVDFqK*G3Kdr87Pa^Nq*mxa7X&J<4#A!S4}(snSHn<=iLbv?fX zv41UyHg&))w5ca!f4*~q=f?f}118+Az94k^LhX+c5Ew3=YH|7crFj3*C;6a@)V`kN z3)@?GO^Zty&9$?o_n8{Kq-U;ynQ}s3Gp=mch^i47=etQDMcH@63pZ$*%hMift4x2i zX>UOxBejb11zgFu%&mCGz3#8|^;VGQI@1PjF&yGM{0}5KR1U1rL!TA?4BLe)9xxhnUehqo>$8sb_IE9)VnRFeh6 z>+A*Xas84WoE&>%3_d#klw!2GBSfB6o&!r}^5457X#CZ6U zCT~7B2PAF8u+htvYuzr-`1I$zW+(*>1g&u_@@_*nB&PeD6xI8+=2aZtECHXYrm87i z)SLl}w6FX{ro-T{Md+pCxjJIL-*4fN>>j!&u#LjGQ!c`TWpKi7b^FT%nyp^~tz~CC zzZ-+GW{MaI$ea<#-G?tuyKIhQ!);{@h6+U2tLnK))2($;nZ*V5h1Lspl1>QFDACUQ*W`$K;Rawz(nklNM`t1mP3t zk%&Qv_aT2I`#&=Bl6P zi9Mtvh|^44_Knj|j%ZF7z6lbZaWb~cXLdq0=P>Qv#G7lb3^o$&0!vt_o5%MXYI^D^#!U*!pnd;9Q#TSAQG8eQKxs z`{b6GH0X{eHyOOw-;CTAr`>Z#A_hlL}YBzPKU_$=qSr|UH#tWkye|kLFo2VM@;BZ%CQ!nr2bowfvYuV zoFqY?Xw$=I)eZ|VaRt}By=ttcB}5`mQ3$FyA)TnaHn&gDj_1-Jzi~-oTVbPrY+L_A zbQs9CaVjz@4<2%Q^ovmWmF9(ppSu59T}f5<*iS^}=D}*0!QL(^uvGFx(;|GQpyVR_jpPbcSVt&H!KXGp;7tzLgR+_x!4v{8>i+GN40tf9B- zv_p>N)W)4_CdtQq(_~6O*yi(0wV=14YrLk|rrd2GXlu;LvdQ(kD3GD~uYIu&+{E|K zoD1xozR-HnpcOy3`LL`Zpq}O4V5R~hm?77cR0Z!BfJl!hQe#ij2Fs^RWTIpR$9D5np$x}i7;cU6&w$%p~nV&;wPZ&3Bh>JQG zP{W?p(xQsR$ccukInX*L7HkhMX8V`M6RsIwf(N9zU~P^ZqI88orO&vYOpu-}EshEy z3#J+3K=+ZDtULQV!`+&Xbr3M;h|-eazUD;9SZC-MB?Jy||h7iTo4 zdu}E9Sij{rwcJb|4+e2wB`hsU_Bi^VxQF$#pO{w1%O_WU2AQGtr*3>ah`T5K5Muf! 
zUVxX~Q1DIhGKZ7R>oa^8BiO~`@66LcH|%eWhX<}2p>7p*CBIM|4((B=gepFKCr?o12*5k%hwxp=rO5vyevt_u-=qA z8C>cY$W@)@eLV#n1Fs^7OsW?X<;vvoqK2~O3k9zn_&J#J_niBeShDR7@Q_)%JJ0sV zy=b#MpK`z=Qud0`UtH*~M;DwM2QLCWf&AO{|HdyL+$|qF_1mj|VM>3U|KN+{P5^%K ze;@R}d-Pv$`Twn>|2-oA+eZFB*lI&3RiZc-+5r}C4UXX?|Dt++ zib6A~-YI;>EX%ehS-mYX&6-NyT;D=lMQsdQ!OkHaoGr#KkTO9{h4rggq3t-;O0SOZ zEAHx{_-{KKg3-&vPVG%@3mx68mIokV8W#b`9GBwSY_N=SI^W>2kN^boos+&(d^PSezv*o*L?K;nX{;)71^D$mJJNJ% z87}TRp}!mb`imX!XxRg`=}8_orX2;HenH=K8-BpMR8DM`qpO-^91Nyk^OwcWUpML8 z!lA$OteyfE8JiNC#Z0$2&F_DHiMv94`27^l;7iLz6cd*0clw`y!EYpOu4=s|4XfT}`6VW2`g(3T~y#!<- z@?M>*bR><`>AMv;Wf{7M;efw<;#^l^#~bd9xPpveDKy>wvs;VUuf6$%0Fd>%pZv4% zAX;&^_VbnBTq(L%VS=lxm(0Yb20EGM0k)(fHQ|AO*IPdRxz6}Rd(yiHEU#OioZ80d z1-fE5#SOu_yFX6+5FSFGoVf@zl~Bq#Mc62a{ql^fn1x)H3;HXqyQmHgpFIZKVCH&( zUmq5U`&+TJw=eK)6?-a)gpztlMUes9Gf|tYLIYQXWvm)Jp3{P>6gLvKe7wByid2D@ zK29sjzmg8ED5rI8y(v#^C#8>j zR)I>9r9PRWDxBZR#48pThtk&Ot{5$Z z+wT>G|7cpw2v=*H5jI?3M69KVxpWsg>iwDG01|6;ZJRj{{eO~NYx@o-H7o*u zqpN4H>W2qxK&ZN)wj@~Gd9=M3$2EH{4GlM^MC$f*hjCqx7cl1;&nl!mok*n5vWYsX zp=)uty`cMqeBCa23)$4VM&is}@q?4jH&;lB_g@2%>B9`9Zl0QQ*F(~QH9~Eoz>7{vi`V4tCv)fmHFuFs z6;c~;$GMKfjz#l>Ps#A{&G4#$Zg!Cj`V;vKcDl-!dom2mCy6rD&2=|y@&z&>B?NbZ zzNUgEr(zMR@&!u+guO1}b@SujT4R(mlo}JXrpDeIq_og2mfN+dI}RMe7dgraYnk5C zeyjKUN@oAi>h3bomGIxVEh?d>J)iy&%Z{l|H)37is_$otk?^$bC51{OL`sw1HVFa6efcg61lR`EG>A7`%LeD1A_GmH z@WDo0*r<-;pp>QR!Xl-9w7~QAKD?;kUaZLku+JOBOkVxc6}ok&S9$k236J(2M%gi^ zWlL!!jt7e>-@sr+E^8)>#D#GN>#HSba9%zgY>7E!J42&z#YPr*A6q(OUgMZ&1|gn; z**D0#{iyZH#rRMV&UO_DZW97@WmJ5&LYeEBgbkTRI9DYmTj*Qsng;<+KPIu&@-(WQ zHUFts>kHYr6;3-9smVZFDRH*PegH@;{U~9V!52c?R({zAnv)1fH)e1vzGmLsB22=J z@Cbryl9Yr)X+|auN+!mqxNR4Z&N(dP1xET1Fqb(bt7}p2$a&wX>(TrIT;I)&#Mwk$ zFM-yIm=P?fp;P_UTPY&9@4VD}$K>LrfN$FREF0e}lT;YR#(a$Rf z9gXPQu9OVliOcYS%{4P``fed_l3jZABe#f0W4A4y28%mP745+dKSWLiY^RnVRm-_AQ%5D*absT7qi48P5mn(1+RP&3i_06d zyiDY&k%PHYHW0?CU4F4L$ze6Q3xOv`>A%+>>+%JaMnkRKsY2WUq>dt~6*zLm3B3MuSA&c?P*K$n?I z+I)V=KNm(|2^gE)c#o9Nz(~(AR;3&FB=ZI~lS@FG$%CK~)C=E8bUlDE<~|(}6q;I6 zoDeVul#G95+;|__->VghklLQj(M4D=EIIGJ-QaPVUifV{>HlVAwp9^xY2q&m@gmlo1neoUxrRp`Mc`KyOVSk-$4xM_u!7tqHQ^sX#u=iBYQ-KB*=Ii9k zUr%*J{(`ElxkAIYi!`^^rnP=ZwWoFdMl41wb4#nB*_W zBLN(DF})v{m~J(ZF#C!u5m~r}>cgl_9NLn6ynZx2hvNym zxT{A0x3~O8u7MNYO60Cr)ukM{0hR$=MU#n=q$E3KwK}{VI!lIMB)A1Flg8NX`He8`g*p`C zjkKi&ZQnk&BsgUmv}RBQkbQXVh%&>o@k0UgNt3e;3$c5nX>lG4y;cOmjB3VMOX_~} z3E?7A(3nkYo?)rpPJzP2$Vq*=aM}nU3OGIn;6UNN3g9ES(@ztHSwAbva&wo#F#r-> zFwOARWYY&GfOdD5qfKBhtfk2FoyAHj1PrvaIQY1|UFH5!VwJ~CL|p)iNmjhwH!hf~ ziB40XOyLNNlqWkNh_`lTXrFeh8oFf|AR> zO^t*Yl_6+6gsD0NjocNy<>l2@s8b#mo5ZQquNJxG`+DosUI!R)*z zy^T~iY?)TL#K&z`MoCS_1f(&|=0{*YVH8n>&$`0${gFloWT~Zy;Z_{nnq*AIt6d$HWsY(M7uED@ zZFULL!}!I%V_^jNbx86&aKfwE91v9x@RFVMdAWQj(OgFAyT828*BQ9otmo+-IY|xC z-}9G%t5_Tm=3L`|R9EI?$)w-XJH^ zfi5LXzb$VBthF>-KxL@J(}7CGBQ7g;f}fUZPa7(~ScERV6d*P9HMPP8I&hd=JH^Euwyx^NJh^Jg}>c%rppz3YF79Jc6XiE zqHA3Q#V5T|pfKx>ixlI2yS%`=v`;T};j zgTEE42Sfl9#NOuN-$tK9d#f^zK!ar|&abDF9flW&1KP* zb{^k#Tphhj>v}#xmLBvT+*;DN$3UJCt@TvAyosD&Xx`@~)!g=UTab-?KRT#KSWk{_d>=MLPR-f1;&H(Ckb~XXUP_R&GyQ*`BNIaeKS#) zwci+&nh-Y)cNiTDyoMczSIQDK+^qq<9vEgB?4Miz+`}7Ep|Z5z^`i4FUmQ1w8mjpo z9$1RT?o~tV?8Hf}edfk&&WA`MecYFhvHZ4r?1hUx?|hP-S1ezK#(E@xJ|Xe%^1px4 zv+SOfzsY=Fpvy~Hh#Hq=VnAHN{fXP1_Ol^DO z?6gp6%AhElVUn)S6&`}5;EQ$h9NWFj1fuAo2@4vMgvr8J;E8DeL)i1KMhcqX9Z-cs z*NuIrn2WTK9q!xjs(`#Y9ug|Yo4GZo6$g{-eCp))Omg1JEEhkUE29nZPH0|UOi+qQ zwrB3Js}*%<2s_46izcH0WR5%I>#E!rBoGo{`80U{LpV}pll8{xDVVA9v7-2KdtC|z z4?{b@+Lho?*l3AbAGZ);Tpw_GdQsFVx;ZWa|K3~8MKEwGSe2W##@R)j>4eh&34+Vq z-=?e1Z@42s?T+G@@I?bX#4Z-UuGO&fnjelUn^!)Y>@_rSUEGp@vjhPM)92-5)X0}x 
z)S0wtbYFn&X9T_Z;@IDCar?f|zO}9-_dT!wd@r<`QKY59-&5DFOqN98kA*zjp?H}} zY8jQ8ovcJh?C(o!eDN5oak%5mR=#KAAY59OPRtR%9*J(SYC8r~(ipxvJRv~v9)2Ic z4hW6_g78D`MW{nTiK*l)uigQ3L~DkRv$hE-%eo_$Hd*BA^kLacxW=Sv{(85GxGG0K z#6v~$p?R41d&YA!A7>sB6@+^- zaeRl5ptG>YmW0c=zr0VfoQLUG*=qKmzY{P}x3MrL1ahS-K!O@-Ualay93=|ulgIeK ze+#2yU0@N)oc?!vasJKFVyDF)PYVRJb5Ygl!lb6fhHp+%&l8ORy#!N7yrF^u`ydVo zkMvW>)%<7MS_85)uhr>28!K~Fdl+59qIiPtqg9meXzcjCw!gcBdt?Pa|4dTcUM5i% zUM?@oP=T?_CD)&yzG{{HJ&;!+MGk9L4mMk&AcvZQGlo2x6(71XxZFH=3F25#07UVk%6;(H2~z#aA z;pf-2oq}QXa-DKX7U(*Hz4C91K-qaIDqXF#se8*eZN`;o=w}|h&S*HEEY|^g7;&aw z{z3s(T6v=^4$6~_n>bhT+dR7It3Z{SbX2elj%xYLBtG#_X;I$`cH5$6P!_)AD(s=e zB8+*h82S5FZou+AkSOZIU? z`Dz1%d*%eixc#Mi>W!QcJm#eNcq7cq_WyydQI-ce4iD90vG=^{NIolXqmr4uZkT`mIJEYPuA$ z;z1FyfA7%IUvy-WW(5Trz=%bCt1qw5)+yEi5fw$oORw{}rPa(=Cb3oBIt$$sf;)bF zB-U+dPGa(}lL3%CD(0YSY+2^qpF&U5BlQkXh&IuEr}?%E+B>mYfOYDIAay&5@0ROk$|YEKgDQKRwx= z>NK*1UO3X6B51`>5*2IsWfK!%>0U-`%#1SED*~xx?ZyspM;6Csd&L%)HA$Lt3;I$6@@{uL3+ft6l1g@H`{Y9PHqVzPE+ ztBY?P#W7yR9H9jtD>ec*#r7mc+9nrSV8$inH~bmxzAHtZtC%W4KxkY?7Z=ZSjlZrJiidN$8t|Vm8|OO{#^1O zNX5Z(Z(2%{rrI3}E^97%S|DAG#VWZ+0)GHI*V`b;e^shB_P$dTVm z&EY=F2_>rXSvEhZcAWg$tc{a+=@a4B{j#XcF4d;Mb>K0^qG~=&*5UjoMmyVlp~kBt z3(i|5tUhoirg??r-p*lp$!>3slN>R+#Up&KzMoMxw$f1}F5}~0zQKI4r*3NJrm?Ly z``x#!8qt4rS}JffVEIbQgGBR_LQE!oK`~`9F_eM8jIv{p%2mPBNBbQv0dw5z;u!Z* z$+ubM0UT+DA3Wl(vYNO8@YmUYIa8HkTR_ylPbERnavl>0t1YQZ8AU1RI=T)+CKhb*YwBM6`8c&t4*G=iu1qw zmRMAcr)Qzc%wbxut2S6vI4{f|%U=*??NKaW3ZA5A8!tjzU(BAo?}~uAD#0D2=XN#% z{N5y!q#BQ@=dJ_rwE11S)Lcp=`JlG=gnLyTb^{rQXlqJS7JmrsZT@l5pCeNe-z3>o zA5H2?yzHGT?uHpK?c%=F+Al7w`Pej0YpgId^WGrf2v*uJFr|Y{r{rQ@=JrHGe>%Ib z)CkH7t_{aj+Nk4fHaUSu-8rYTd*@J(lX$2yFNqTENx}|q%`!hl+By!uk02cd(-9Y$ zJtP;KFaI=b+=y-oP_PY}^p3640O>8=PszDpi?!?JI^W;I&hw3+%yUuf+|QEj^QH&w z!z!^yWhtqicOc2NDUV;8QnIHM8D_t$zj!v`2~5Lnrd#AMPzE+^J9*>ag96PzM!N<- z+4iWJw3sdog?!%mgyFi@|0Tk$evO%s93$pKW>(oMXY~Zq<=H9UDVC)(Aa#B-8R9q@ z00iCxf~ZZ@;*9K%TVI}xl|A-r@-?a(vH&o)uB6-6it*M5S}X-v3Lo*ulr_vl{8zJ- zacjgMFw1DCFy~x3rQ86`l&vChmm#Yt+3P^AvQatdft3a9LM};ImsRvz+KHL3N072J zX=~k8g5xnq8t+I34rc{ZXZytjtO8`>`ptIqL~GW(N3ex&9Wp|20<4R}njQ1`u7h!2 zmpVU3$;O|wrJf8Kpm+?rQ`a}NX?HH5i=?f{X`q1-)`hl#BwPoF+G_7`%nhJ;t|54L z7Og6dz{731M?jQ12d3Kk&8=3mwy6hYBnrSdyA-Jh#%E6Rgx8gir@KW2BAqNJi?GY# ztdvNqMe_QRHGy>qsQ}6g+@@$3KE7Q@59uh2pE)jN?_Q_yDt0RX$jqzD>`Hb+e&D9X(Fy3SZ!BEu8`zLD0Q1iLG8LgB9}=o7#ZDW9$DminI-TIVPCjwj`Y zT+GX}9gp2Zr6@qo_)Q5ScCKAFtrc@(9I8pXMj_$i6eczdet7{b4Hk4DdOnS^Y+_v` zQ@qZEjy$M0b`D0BrRf&lzrY9lP^oi`wCtUG^Fyd!YuvsOr=xJ9$C7WqoTO&ob=%xm z*msA6gq4Lc06w;bSd`#TQmh}5k1bcN6D zXQd=0d31)?_HACN*3)M48VXBcTOKyDOd$TyP8R7e^Up$!IN9_1hwf$USpzDVZVx7dDz&f7udEc$jnrm-kp9YH6iFO(oGd_VDS`S-^>HZpq@zk-2x?K4yx^%*1# zfUvAW?_9cv8b^b5--*-bukemHqr2Gz0H|k}DYwToY&YNDcb=<;+ps#$@~JqeT6vgZ z4z<;W6unH_Qp*_ZO&eG1iE|nwI8rUh>&3=A2dl?QSSwAdXDl*Usjh62^P4T(QZEm( z7fFlv%V?i`z2>tNVA`0k9Zex>9Y()f@3>TCQQEkxZuxNiDMgreJ25ZqNw(xdI={ zJ7H$m8fD&SEoGAoDy{hh7uSL6ZPa^jc6@Q#ccc{cnGr9wrfR=A!2Ru*%VnmHG@paK zoyekN0txNiNvebU9U77?C9l++-U^|so2bvM#C*DSzqL(oA2^&$u8CfD?2j3crbD(wk-4!?j^rVSfNk! 
zGB8~!%ef_TBO9r6ts{bV(4$w>=y>8QE~tIga?Pg1VDHRSu(?2#)OzM57iRw-3%X%z zytwI0v*_=c8BSCI&@4N}Ia7iE1HC2K^f;B@1m^g7C}Lm#u}4h7ySx$mhjHN9OTA|A z-oj{J>pXT1E_h4aQLfwkMva}u)06PP41SPnyUG>>Xi{^g)X6(U_-d1gWXa%0s>5$t zmQxm$o#EH|or+R|>+E4mP;-dLw!i=PSIVOCwu~saW>E0WI^Flw%&{brMPJSz^0@ElSsX|&k8$+uRq-=@tevs zf|axnUSe|A$f{Hokt4F0jT7=-F2F?X$!5&(x-K#_gQ*4;41%dTU(AiBBb7`n67f3!UC8 z(P>y5c*)Pl>g0ZvUm`|eb0Rjp!| z%HzEo&*D^4Wkb@pZ7BHFYkU|dJNv=>9v9yCK2P?muK5wAALnbK~O#^mB?X!_FKCYnsiFW9CxA{G!i;GgG^9?=y>lRJzxL%?WZem#UGT2Rd zH|Bd@74^!?f+L8{rfe`#D?#6fkMM59MTP1hq02hGq92Af8Ta ze=8?nTU2@T@awgjtv*7BZI8)lzb<+5OzGs4`6K0Ffn2zTM{R=?HlmCkza>Nb^&Zj& z=yo$pc|Mf~DK&GPQ;^I@D$ej|&aGy(I>;D72GpNMLTGpThzYt8qhxiDoX?b;C zZhB6^-Ufg2=T05Wg@4iR|CGRkOD~n}nuoYZI{rm({roKVbNhg$qzj69yvh#wsGcqH?`%QED|F&j zFvAenAA|h+kI?Qji|9HdTW0OCzehEu0p#UlCGrg%e=X#{|HrRa?GSJmjO~!||E#q9 z`%C4{r|f#-*fMqE{$GFWpPK%W-HRZBW&hPUXV())%;~P{U$UG3_?llziApVS(a4ym z&3_xV?XDM%@ly}Uzklz)UPXKFUeue;|Ib3X|CkIpv0Xo=9=5mWzo+i{|z z7256RHmJz!9hA}6?4clcQ3G;q8}yb-rOn2CeG$u9`SX&8&HU#?2eR%c8v~#n)Cot{ zH+%s~m-bBgJGokEdO(QH@>LUFs4~85CYAuorPbHIUtY-ov{^fVS_frrMtXo+X0tNH zH|B<&^>4Al%38)vemWCF0_j+0aTWc2zkd5mz0iqACW7z(!0f@;c^|jDJc&{VPUto$ z{N%14>siALsN~Zzsdm1sm#>iku#CV26ztJV{(5&6_uT5YJ3{Vqmc~`K-U%p&(GUEy z08Hy>tZ|xqa-_FooosEx`2Az{{f|#P#^&fBjv{DVPG(ZvWtsxZvUfp)n9G4Ae0+b7 za>Ha3&*R6z1QGkb_TnW3zDbiun>HGlagaSqdh6|Ov8Z89F!e1&G@83g;X3?jrcgCeycf%LanXb`3? z1R^DxwL~3n=cWkBPg%|EZ?n>M~l^lJ0{dTco7Zq{wXA`ctMp$O~I_|wcqC~|<< zCTqyt@aL!d2-@!)t89@+=uE%6-)Uq|+3GL&w&sgTdyDt8K(BpGA&e7QtEh-(s(`8y zb<&6bovY=0T>CQT`*ZrD?>jl7)Gab^#bcSDfBL&*$Rynk*2a@l*C*hH#< zbwB#!0j}glKE9Z8r3jN|%y2R4rD+E6Q=lw)@ifWG;MX`Rstj?EW-Us>)%da>Ea_ZU z^hTXKP9r6?Mi#+0C)d&{W^9b2=rjtFS&+u39br3u((vBO!Qs?+<9rQcx<6zU^kZVF zR*ty{0_M=wsg44#wU52+dtWQe^6BQf&t^$Aluk{76}?8kYSVf(-c_tP@ON>1&xc0S z31V)J5|7)E!By0Ch5ztHy${>>&e0h*dh5`iEBe&QD;7-(7sqr3kj<{}C*8VJG7EM- zeqra(P|~FEHKW1r)sADgT4y9@T4th{#Hrj#zAYDP(w_|gbxHkgwoS2BGCQtu57pDm z@G5iW<2U%5f&~irZ)K8UqoDh`qc@h*)eS+-KYpKgCS9+=wWB_y?5_&ESzoXTZ^NHA z$nI`=p;DJKH?@*Q-5f%8Dx|V>_I6HHJ9tvo+7!32FjM ztDG3fBUPD{l^T`&7@S9DB$kr*&F#xbyo}gck!AQun>OGs6IteoS$|p??3t1C{7A^b z*DB=VbWLg0M)SLrUImu|u~bp49{R%rz(x%QoMk+6mT2QLQP0=4^UAhwBeQl;I7!94 zTF#G3W^(RL9oqs{;&jSo*xU->Kt_N~1m%A~ zjGQEJM?~rkY;qUu-C6GP&qwT`5G+5kHB@rPm7zXSn+=ksv8NF{+Ov;b+}tzjq_;*} zBRvE(Q$(sVCSKN&z9OuWsKJIXA4o%$1F=7l0Jjp(4>@P*tS^l!rJ_J`I=3P62z86R zXyuF>uMzWKf98%j#uISdpqRrsZ*t?qXXqA|!(u8wMC$sao%Y<4?=u?a+*p1Z6=oZe zqKnr{A%C3p^7oc?Obif+)&rKQlzFt<%2U3$>=A=bT;o|miwTI2)JI2D5f&@^;8P&p z_w#-O$*W5iAZFer-K)5o>Ag|5kmbUykqK&@@m7TsKR&%5-Wcm;m?1xz>rP6^|$@ zK}{hC%Ytaz=OhUeRZ!N9Uh*ePN+>1FI3l@bti)v>!iOv|+iqcO04zyL~TNx(90`1b<}pt^ugB4QMsbVGy0~#0f`Wg_eX5t-?C= zM+A)<-lq=WO0fI91Acs?;5v9YeK~R+kg<^*Art-0cQn!RY*IgbLZ%eooZWzVHSfJ5 z&eSOLK1cUJjQ}#qZj<;iiC0O=*0(Y6RVro$h!s8v2d3JBIy|P=Q53efrmjN=4 z2`kY^a^aYIk4!9Ut)m+xj#b)peNQMz3#H)hxIPN~^mHdcEP(cwXy!Um-@k={}=o)7P%tkZHDIsm~!xdI0&;t!VAsZ2> zPvpyuS&!c^>y^(ye7WPsH8QB9j2(fgV}rdi*wn%WWeysPWDIdOk&l$~`#FAK)tD@-8;$iP!f- zZPve=0jraw2b6u=n-K5iGq9b?H#=q1zR8$H-$1n3bt-K-EZBF_m+p?>(fgoXez-f5 zgOch%R5S$5_4G8brEjms2MWbuzLl06Al4Ok={ip@>(SivOWRn|8e22L4g#An2I!WOeJctST-rv0?72Ych! 
z2=QY?SS^Y3qHN$*6=Y|ew#*D&NfDD6$=B|^bQv1tt{sxNDZY7^RChtFsFxf*<)R5D-%#N>6 zUBG5U9!Q%`eyBnDo;MoFKi9DMy{p6=wnf{yZ~byT&DX-MPNw^U&X+}6nC$v9L=r9e${j@18sQH*6+@@WDoV z;+-3(F#75nJJ515e~YQph(^{GNsNMBdwS>(P!!U2)5D0~c3`GYMsqVw6PqTHFe< zgQ}!41@Ih=^LShQnw=0;#ubIcVaQRQmAXFftzG`D>Gt#aK3L{F?Y?91DwXk7+V+HZ ztm`$6=<_~H(A6wE<@$f6URHoEQg{NhY6_MUR1z(U&(5nfBpdqSUKge{oXrzL8QvIS z(VwYYR&HH(hC!!CRg;jr@N)U`trt?w77MY)O}!0`NpyIAY5q!30GPjSF(Y>CG%CG^ zGd|_td$XLe zH1rLCC{E*okl7MJPgMfy%=P?LU^TsSJxaAO2_L++vKM=5Y#w>21~HXVTa)${_0bH1RT=P|yB0}KcA;li#HsgcFGcr6XQ9nk!*lM8YydsOP>N`8R27>VwidFQ5MW z{ZapLj(?k3ESf~vhSU}DluUKhRr(2{fX>0zkA#3ws36-nP_Zd3TuA5^%W|%ePd~fY z!0XuE?B+oE1=GRGrPsXDJ|(@I^9D^{3?6IV^L84QfI2lR0#yp|*)J#`V3)azbw5i4fTsyvkBv za!`hxUu7=Vklp}4Y7botAY1C53wMWwT&qnUVXq^DjFHtMyNW_PK>du-=XMrL$&5G@ z-10oI3cl5c+~X-++WDqKq8(iS6spyJPVLUW;pPAHvdtii>4@}6QXFRD+; zn^D0dP&}w&lwn+gOSH;yqS|~1!QxrS7?63fL#EgHTsrV_jwLo61?okkO$ZU8?tCwu zGg4~7^UgWuGBW53x7}Zka_1U|Tz@u&FkNM?ov|>DjS3`4Amus{KDWC&PH8{Va*8vl zdwHc{l>bSFRH+5|yFm7}s7&VnKk(qJAHMq#Hyv9N*X>PopHw>amGw-RrR5fILCVyN#Zw_I%B0^|p8U1h? z!LX$DL9x-LJdq%k?6!mb%keO#`qtSl+}OTb(_i-e`|siVokxt1RTd8#SoR@i%It`l ztbLLh*&TYvea_y61r{*G)9~7m3se={eqN?!s{J=vtjnB5w#&hvV3$~bhI^(<{3(Sa zGg_?= zx&QEX%RIS&n5_6s?%>65H3>q!kd>iftPsw`n#_uX8qk61O5Rli;f zw-k}xKx0|%32CuSNpj*PHb(gL0nk4VPU}eJtEAO)agj~clnffv(WWvgC3vcCUuv|- z*XLHw)sk%h_K|FWShFc$`X(*2;3d7gni5ftk7Ojeo%I+MnA>)@s)snVvZL%4ic^Ec zN}XIk6&p7#m$1nBu1WfFJ3mcwc$l@y-AEoJ?KZ^!LhU#S<=B^GIuSUMWPhKKmCwG1 zV|gVWAZDKW9X0f|%qt(>#cCm6vlTDdODT74)q&EBHy3!rHmODT8XZyyQX^%KG#DT^ zi<-#nC$lE{qJtYnj_+*NDq*>GJ2~1ChplVV-a++>A2z3C_9)99gl&qP!1l5Ztp=5lCIB}JI!$L&t*3HikyOc{*;;8cg5AOWTMcpQgl2cQ1VT=R3N2r z7uyol?@1`BW6C_E<%blp9;&wyAi z6GEtU(+~EJ`|@_?$0I$4qT36(Ts$Ib0r&E$Uk@M{h~Q2%?$7!9*;?Atd!!=MAe)#g zxzQ?KW)S+;-wBa#U|658eEeC3PE}Rh+KH{%wfER-jpKGNiKXRFlP{7O3|vYhWeR(t zvOD1uq$81_BpPq*Ho^C~%)*`{lR%g1$;6!0Ym5E8&|WRWFR;08(ou%8FFKen*gRG4 zJrXMRbDDPz)>Kz7&eI~>Wofk2?6H^?zx;Bi#&gwtc4PepepC7==dT{~QXDKgg>B-- zkCGA^@4r9C(hGMznF*KM6l(9F8g?w0_vg18A;<^rJBxAKBcwk%1MgTf8QR6D&QComplV*6@$Cu>t?WVS3 zIXaCt+t|*NY2Z?i<&JuBKEo4-{EVd>8v<5^cy7VZ8T0-vVP03g8SJML^ZQQC$4MEP zqL=pE%`TTHS$A91Q{(z}xdo~u$I1(OpHOXW1W+lcWOX?yyyADBb%ByfGtIQjw%Z9t zLRS{5`H}-IV;^>YS;Fj&!4d@qH>o?Xv0oq$tx#=v1d3Pz4;rh+1Pn*8l(A)gq5Hfi zGXE@-gwAyG6uq4-S=g0CJ}-kY(*2UU7C??tUxPm*ADaQmw(QUj$r% z>HWF#)@*&nb!&!tk!7E&S_R&wuF#}cHMqSEn92+h3P~s|7aN{jVE6S71{?mX)CJ>9 z^?+%g`kUF6&CNei+bIMLintGmIJi^y!>-~3wj{c?%r#iZms?hX^(B0#`_8w>Z2ydu zOcp2N_d}NI;gF<4>WoR#^ zg7|EN$VIeh?LPjVe1@}8v#g4l8HgZGH&!;ogs^X1hI#c>Zj{V7Sz5w0YUG98gM}Us z1mKs2?Xg!iUd&Z}Ud~{ZcNe=o1N2%?Rq~8Hd-E;FZKYwpgyJfVA59i?UuS-Uk}J8P zspd)I1XpDgT>T`xVIHzfq)nNNcJDVX>_?Z`&99pywH4i*jtTeM9WAKuU>G=&lu&Mk zwN-#-^w)U+aztJgVj`Jgm|YsWLQt5Dd=on}qiJU=P4p@VCK1DG_VVjfR2mgeM6B({ z$C!7n);c9Yq&W*m{j~HZ#1f zR!k~1qS0LTb$463*He+nQF+v})3q20pvqbHSMR0ZivVS2L^!dA!Z-T{}W!1uk5Fr#KY~r++50 zg2S`~t9>}#fF^~0aC#}dl%74_U;RE>-;by_P;q7|T!Drp11CkIWDWO54Ga!g@9r5*|NBshy)o!IkD%<*lccaA52(1~CTs;K@USyj0 zXCEZfmd9bl7=`59H}%H!JeZGqc!qN>I;+G9I89|S_-)$rb)?xt<-Bz*K80q^k-)>A ze@=O|6d|Uld;DIX$LV^kDB>$^bk^iJg_Z$b9lFraMt@2MmX{Z#-n7%OZl&XJb%?6Y z%J$tn&m*i~l2FpQMMX?bD$pKRDo%?CH@Kirw=L28Zbi1RVXMUNbx1N6gq3$OyKj4F zVhw#Y`d8wOUVJjDX<&4A?Ti=2nfT>MIn874PP>><>APf)`?iit^l%OdaE>kan2b7~ zUH;tC0u$q^P!daupQly|^X%m{(|Jk$-g7Nu&~}zJB^XK-Xo4Z7rLoEM!Yx%Arfuo_ z;TW;gebuhRW|l{a-UsO-Ebb|qsv1qb4W3c7D+V1qy*1Mnw-Yb2watbatIpN;oTnJ|dcIosW-N%HNn(|Y4P<|qy+zQK zs%c6f9wo*Qx@-IuBSDGJNlhDZ1wk%JA3J8AnRGib+@?uQdA8{-udY28ltlgRLg!aR zW*e$B>=(qN$b<+a5LX#J! 
zlW_-JdAtDzRS4Y-IO20&E9ox21|qW=wjG*zuJ0SHaZ_U9eG=~uGa@oATosbAz*q8$ zr)oQ0=Dc@s1w61@Fei8UZJ!KX(6wRD0Uauz@qUY@;D}XBZaE7tON3Bc*`r_~PYvHh z=1Ryha}f8L%-85_+1FX?*qe%&acyt*kteCmrC)GR(;w5y z#FfbjAvLL93gXJt#sa?$Zwyu&`U)ZnpE!Y}fgbD5Mc~EPw_~I#Tkc88v~titXkUQm zBZavmgqQE>LVc4+0+*IXb?8!kIByFyS%z_QxOri$F9#+QL2g}DUPm(c4F$aB2<~&O zN5g4Hb<4z7Hv5FIL>u{ea>W3k^uKtzi&VDAF&AUA~&D=-_|7d&I{gN+t)HoBnQWyh52 zTxKY=uxPobW1{^^TwArebzWiuQP(fXq>(NdMJGvpRCi2^RW|_A?u$!~=e_IEfFSI(N3tW!KZg2>+G2Y3#}X9!US4JpR>BGYP8u24(jwX5Fo z<9C=0`|6xo=hSfxoV7)~TrmM(tOiKCR=HQ|F$Jfqmu62zI3GfF&WIbep1kqRtaVtG z`^PefC(Z*0m)iyCQxE%Mt|ywf?O=2+>e1~j%xSU42~kkEsQh)WPd}-zRC;WK$<4sk z66&x^4I#BG3lW;+wf!QjI~diCZF6a+h^>O~^e*I~a!M7MPYIEHu!I49nK)Gbi=IluE#(S08d%%Mt6_N;dR_1-g|E8J}{UzewZ=w}MET-7GU znf?16vl-h;VWCcGY;UUwspHZ1|l4y1Ev>@AZ-WSzwn&7{6fV-O=}T)oW5I$g(Znl~H||=NXrt z`^WOr&RvE~Th)^>tU3!*IP{2#Al=6KK9zM>_WGTL+8cPT?c9iq>xsTElm)V^OFKyE ztZJdG17{$J(n=0ci7VA!UzcwR7PE*D*)1vzNfsE;p2;=w$tD>IrQVQ_bIT!dY15Di zNNAgHFHsBc8pNU89w)n%hZSU-%R_7X1?fXSs!i0cG&G4(jlfIad6zT6+k5xh$ZM}i zPv2B%eUE)6pU5wXC)N5;D2F`%5y`_{C>kL!ajW})uh%v{Ujeaamby(N>L+nb#LvF2 zcuB8Z&a`Ik8%e#)v?x&@Y|$~nZ-N;kZI|MC`gSCAK7$mgk@zFRV#4TwcjdgY+hD|M;NFnVk$O!xLEqr?$+u612U{ zT-}m}F4rq9JPPKHr2;d>xkNnLxuQ zf$}jt`Yyt^)|R7oEKdz6n<(@t;wA}hHu658e$7P-xdKbV#`}dUilx<}SHwP;+bcsg zYx>A9TJ)x8`|)QgB+?w?>NWe^OqY|Axwng}MC`Ry43W;Z$yr{{)Zr`)ys3?|Qx?9B z|9I6`T{n^cI#bB=+Wg69Hh2f-nEp%oFzH7!N7_Jfit2Dw`PPoPvkiJ-8|BJ&Mr(*j zj6263WOpGd9#(^}v1o0%n7wn0;hqC4ygn@y4%lTZQKkx{T~-I57OhLDuSVPA`o>%% z_`@C8hf`FmVW){d2^(T`vJ^$|@o{|EKAZ8}%+&sBFLL&8ZyGw+H^^6ZIjtWh+LQMuwKQovUfsV0aeu09%!>lf=r7ClCwUqy!e4vgYgs>$dWs$08 zP+;1=7MeyW;yk=Iy?;O_OMm4o@h|tZ6$0$)x^<_M;opYvzu~tpUBJ_1SdoGM zc8?*@wVNJ{nQ8b_=luMPaUMJ^O0kRRchKt}cQZl{azBOFJAHnKWB$8A{^GshX_u4U zko-ms{PP~)1F;iB-5Bg|TGT(rBq04DmMSDj^_$E8&wKU`lq3E(k^VQ4{x^~SvC0n3 z=Kq%#>B%}O%<-o|N-q%yCn2?J&;HYghPBO^0PN3ikPGsMWq29=4s1rsQiuL`n=ySP z0I1x>?f!-8Q zIzdmvqTEypeup)Gbh0=B!IN`scPW1k!F}T+5OFFF^)`OHF$*9GAFkW?@d*hQ2gK~3 zdA&&zJkO??7gugGFan@Pc78at*2gDAwp1Pjw&dB7Yp#HCzEtoQGc;0ez4Q!zox7{; zd#Sv?4|mi-5#shqJ<{(V28>b8!SjRlo#xFBFy(oDGOX!W8$iLjtp#iQ1ffmc3X6p0 zTf$CPZfwIZxo-B*eY^wU-uCrff>*s$hFfm7UbGIft6|~j1jz^mltF;Z0|vqYqY|tnK)M? 
[preceding GIT binary patch data omitted]
diff --git a/images/dashboards/delete-data-source.png b/images/dashboards/delete-data-source.png
deleted file mode 100644
index 2d0337a92b8555a3e1dcf37e571a271f4639e9f1..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 88006
[base85-encoded binary image data omitted]
z@L83#6y1}loRF%Qt++p=PLiUKudDn9<2hXcq@{S$q65sePLBI9KUq;w8?@r<%-bYi z$xeyF(>fUH&ct^dW-18T28pb_oZmMWd?~ZKcEn(F8?TUOLjxkGrE;n4E1Wm!gXf;h z(;7&=N{K6l=PlG^ZK#&8uzG!w*GM8~G~BArlS;}~3p6|l;bE;quG!E$ZRXwxWux?P zenR?=y3XgbDSc#RUThT;(olC!u8uriH(z5f4N0;y?9T#H9{X9RHADurWYW@7grNQ~ zs8Z`mAcMOa{}Hk95%bw!jIC$s*0=QekLt&_NV5Kf*4{g{eEdo&`*St-?!0%SJs;qN z6j(j27BgQoV(-E3YQ2yO(MG!mMYi^H>@BIXJeR(#$Kb zs1bB}xGuThc!cecCf;OQoUc(gj7l@j3OxRj+`6}{+1Xym!u#2cfF+zoo5&8(IKlQ; z0(A<`?Mg%yX%s2ay-7PU?ly*a4=7Y3y7Ed=J5R36mb)$`=v-!Qmvs`{c}^r|k2!?(gC6tFa19ed1u!j`UTNekUyGP0 zh={{qqnB7uffZ9akh2gaP%-&z^DrnuUbM&es_*w=(Y5Q0o?h=RNx3`#v=h@%Ls<$O zwk5KF$RTp8)_Zi}K_~dc`1%9h1J&zZg#N~(Tk{( z&XeriA*^C$x*3LwMBJoe3XV^w@Q#}+bM@Am(L-0*#z>KA?#^OWZ53fFt22D9Qy$-; zA~{}v#Ie%X!#^EE-9ZZPS&E*fuIfu}AT^5oHnoX>C)mh)WYY2zUK%ILXZw|d234;9 z+sO6z-daAGej&Ap10#%z@XOv0;u4*)05u4+Gh;Q_*mcROXtL-zVlyK9!IQmY6$m$nm zSuiE2Gp0-eUiK```z(8?Rz4GKbgLW`r9+jiV71T{*IapY7!#v=m7AMxeE&d={1pCE zp>-K@qDrXps;|=GZr5J&m=(Fa!{eO-`+}xXhEOeucCG86gY1%jQBa-VN3hW7(MU|6 zqj!ciyE0_9?P7B>loWr=s6MOT#4)sJ*>$GFH{j%jTXnS7)$O`c$jjG#V25^y;$-v1 z*wQrSCq>|<%+wILN=uAue&qhD34s?4s^hX(5&O<%|0ZDOG5`3vblG^uucJa3o zX&O+M4;s6v_6Rs4%qQB3k(n4)w)avSR(I-7Z=!7(7Rl{sxSgd9W9hQG{q1`L5X>cM zUVsC1p)IO2OhYGbhH2HfAN0bsY(^UNoMX{Se+ds)(8Fv$Oar+IBI%1yw6`yBOdCY> zi6LL_uVpwS@4Qd4Yj<$%q-lvRyBAwKU|UvsHS#cJweOe{MOnIVw|7C_K^mK|*-&YZ zbit+G4I0S3V#o|E#O=nb%k&=CT7Wh6(0XE=Nxr3i35wG8>fA&4@Z*I*YE*>xC4W}B zPCf8{?62@bUMx6Iuh1)vFKt<<_Mow9K}aGdTE-8kkN{RFcMgljR<}=_jY#ZmCY|F@ zuiE&S48f2*&PI%Y1K;H9q4ScmRS4d_T$9{t%KJ-~p!=`h(vvhlt^pEog__-YSCcfz z=!;833yE(kO$R~RKs;U2GhCTrav9m%PKk6?ZQrPVw!`7Memd@Xz^H02utEYYNOEmy zv`=<-G)2VLB^aOpyvAE-R0&sjzaY*|_ayLvDqnaHvPa96DOxB@Mf}~yR)avc_=Dz1 zo-iPpJ>wm#=eiqsu|Eo;=4sRt3-lS)R{GZ}E3ucAK~ns*5RM?WA3UU-73MM^-@R6> z-^tP}gYz!xkk*eEN3*cj*cu_o^kAqjYO1K-ai3W4YXy3n3rpz8ts0Mf>!F*Gr*&9c zeTgE%TnjSgasOex5~E1ZGNkZ5-jh}AF+C(ft#Ya_cL~=oeJ+Q-?f`qos?9EI!6gT3 zv%Qzpbf8AEZuHj1-ybQPfxpr(1~+wa)e2mhIFiV?lYqku0KB8_#H&xNU;M5L{RAk= zE2>4qkM5@i!Co{LWG7#lk*@q$lc_Q&F}N~MH?%(X$OCCi5Qq0Dn-+M^v`+;{R>&M| zg!i5LL>a%CNmq46tVLx$U*Skx!t1u=F;xdWYfTL{;rXse>0_2(XNQ*tyVK8Xea#oY zh zI{ZyHB#80EJj6N~7J6y0wxRZs79dn>BTYya-@xvDZkIQLQr|O0_ZLqbcp3QPbYP_VJ3rUCI;hcJqXHn5Tcznc1i07G zZR|R%m+LrBaEckf)4s0DgsIP7SU26d!UxV=-vtmOd{B2xtKOhZrkaZYa_u2qhUkUv zXB-U)2{QaTx3%xlu?XIj=82>g1PmO?Ck(f5@8a6^S-|ovRvFwcQ9H9n8I2%Wi={EN z)>2vZ$FY{8piQp^4bho)UM;x2l$6_CxG@FOy6CzsN5Oui=8{mIiZS)$#hO4v5Z>+a zVwUSU*ekS7jzE)_Yt4?2PJ!kV<8fTMwGc9ws0P<607_zx`Z8~lYai-d=o~Mz4u&|VE61on+zbBfLKN3wx=>12#N2>~5Sz5WYcQO>jZI@kf7mJ?49E!9ie!4>yG*}>InO*q zUk~(Hs#LgRdm+oAvs$f;ypii6)uRB`p3wWMQw{Frx`=2qx%`NUvy?cJ=!#*NyrFU& zZLzh&q=H-`3Y6Jx=)1hYnn+EJlxIFjmo}jp|6pwz_LMh^hgT;z*x1ci#9PjH+oN81 zXm7@ zN-3=1%F>gvGY^|+WlF4gvYRT5RKaCgf-lcJShuT?_ z6Ry86Cu1?)kF6-xmBoz$;^eI|5R5q?3s-3uBU+O-;jDm}FL5q3M0^6QT88-DT=nMZ zw-iww*Tr0CZ=w#~P6##e7uVRwY}_NDds5d&RzB8hLS+I2&kAmDZ%>7hZiP8MRW3Xe zw@cPxqb3iQHt6xqOBr8Cz4x$@P1-9;f%Qs~ug8-eG76IEFzg{b?teu5QOez?4t?Lf&MP_TGvRU@GQn2$u= zcxk7J@!xl(j81WGF4ubR4L;+o{89?I4r!r096UFoK6*4Mu|NB5xPV?7K?VWvr=6?ALLWGdSs6f6;CI%S z6`FtWk&sNLzG9+SZx`%&3h=gp93}O!fqil)`4vtb6wdl`UyJ(AFC&0daV0LK; zVJGd{z_J$a)do%%kwV@_X54>E=Q3V*gjj@~j+bBW_5Qr|yn9m7WsihFg09Ft^1|4V zi<3p;JJnzC&@y^{#jpSB){R~z{svfvDc(NTDc73Q-hI)h>TcgTIwv z9VD~)S52IXK9Txx^yydFD|nv0E?|?F?cs(b1(>o_(CM~%^83 zNVAG7bTkoTXmD%N&{rbJ^clGwG0v2E?r?Wbd>>4?JsG+;dh&SQTK~MXSWeH{LqvkF z$JR~g$`!5o(9JRD_a4GRlYyZh6;PX}>fSai|Nbd~lDiZ2uI2usfvf$gtR#S@baNIj z%-RC(O}!xIGmL-~`xY{hgCL**p%iSfDFkzKz_YWW9q50*n6@m4Dqx~KTNj-(M37Z@ zb*jQYVKQp$N2;FSj7M=~Vh75r-S&Nm!xDQj{lvtZ4zcWhcbq=Mg$m)vf%&~cYa8_T zg??Sj>ugPd-PYtVy$UovawXPx8Ic|4XPq7Oy5jya6g 
z4C%y1RWfUMWO_bUqsra`2ud-(5|kpYtKhb_0SB|`Xd}qxC2d!`G zfmY@Ro7nV}tE1Ly(e0aSdHqVbB0KLSc3Q2N(Y-)Zh~ZI5F;lc{B^tr{%8oOX;@3xM zBjM1kdT)I<`t@;d->DrXOO@jh*aB?n@Fmy2V3YJMHP}PZlf7MiG~?YA)wdY1i#*0q zR?>R&5?i(1Qw~l}$bq|`B>QTm@8U+8RXgj7ezEOxuDlEmdSLH}t7+CT4lPfy^j&(q z6ocC%?vlwMTR9o>g{^{qFX@b2it3!$Ji4nhYVYR!4{#Q*(!KB4?YnUS?Rb#xZ|Wy? zLxa)pN}g+#-)ivh;3Rzp&;U_jAQC4+@ZDodoI|ni{BppF;Z`g9WKh0de5GCT-4T03 z=B)l_l&mOCLsBKAL2qYv%5pV(Cy?OWzHh$#>il73IbY02`)tCxxSiriv;n;ZBKJbC zM;SGs$2XBTW4~drS>w5$$q?x`wjSam2F}{|TP^{`j5P2{sYeQvjGE$D5~tcjvi1Pt zAccfnZAHjmlit@^3mecXWQ$)Y)_T9GpRyEF5beNuQ@sMFCDd0n9p!%Ah}???i7xLw zn`bqW%}I@gm-B(_$Cl`fKm<-|t^64`jyqUURU?UxefPB7S2Ad)dk{uRQZb``F2T7z zfg_!;Ydd*T$hglvb67OuiWvV~nuB5K1&J3Kpn(O9mP1kah$8(#rPz7}KZU;ggA{{h z?n{*i4fKCh7yMt+&H0!r1fCR|(42T6+t-z2U7T#vH7;H)R9pE$G2doA$i{*7jeyv* z)^^03hqF$5#Bx3CS9n;ZD>rgucV_kL`?EDWk;cqDfHjC(l;Lq;d0=E3ce4a?Y|BBB zP8^luYe_64#~qtIo86(yU4^}bRw0Zrt?X{>bUWMbYdKK1DOt#%>;ZY9do)0e_5O&> zEWfL(1DNwssE@9++nf!?SjcqZjBK0u){HM;U2~S^lOxKcvp=i)CQw>-jAEFqkah}+ zTjRqKH`LF-yVAY^T~V05vp?ueXU?3-P{DR=Rr5`K;huv1gw|P-ntTo82BO3z(Mx-S+;4BQc@k7{%t!U2@v@k-wS@jiqpB}n0 zdS7PxJw3cnJ%7HgztE_YGG*79jOOWEd3>DtmcsM7s*RxY0-_0rltF5eRxdI?cL;|_O37t z<;5$kjU&djhl&2|p#mVfIosOB+x2 z59bsD;lbMIPFOFo6TB(oHI3*BPo3rDr)0S{fh$LDmmicM5rQ|?ml}kBLLD#OyNju+ z{i>5;W0X*`StUdm6{~uw$S?9-<||GpwKC6Y;>%lJN@AuMcx2lQxj<;c&G4w-qX@FA8_7T%*`pq_IoN4dW*)DBH546?Y<)Z`+mGPzV=y)ywMB@`hdOW8 znykr{*j!1q5?zlXp-p?DO&@9|3T>^jG zO$Wt&Wh34EeiOru8HnRk{lZ0isISo>cl7U61^-CCa7W;hL(Zdd7mtb-Ct^v67I&ruYb*}JLZ|b$mdCoT&%I;H0$72Z3A5;SW$eTZg zZAYG?^f2y3l1dcc^>1yGSkKkz70U5jV%Xfsd&H_OyzgEikePJ{KTGa!+qdGw`$Sr0 z&))f~bJZ`GmzU^23F_iPgVWZ&0+5{Ou~XaD#{f%UE`|k(@LdMhrf)1DdkkQM+v+Y1 zq1@R4vMoxSgc_lI!RL*>$^@+S<)PlPHI|S(eaUBj#B%KSQ4o&1yG!Fl zt97{Wm~!X5*MMY4jVOQGWpU>OcCc1TIc_{;0dZq{kom8J_SfmYJDx$GVMmqyEP-M4 zK2z$Pz^5;n#qPSy?(T!_QC8e z3f%-H?~SXS=f5yMF3TRH{?)MOD}V)FeEP3f(@>wU%jBV1`ZvQK+>f|$Y`XZjjE_G% zWdFku`D60`hjDt00^SH5gVdi#-`{eu|1vDeR7VmY)l>hp82^_Ge*G`_2Y(91W&P(< z{OwnDz=3J)#?!z3*`G6bnFK%^V2SCIe|M*!zV{zzqJQ%*$^ZeaprnAr|2rzozg*>C z0RV!Tqv0YyVPt#Z5nSx%gA8{mo3)pSt{#{=4sImVdr1?g0>gkt2@R`Gm42ZdwJi69ims1aTa{kcn}2%xBF?I^j7?Qg_UpII7S%8VLfDMm zz*ID^(D_W<;1*q&>81TQ_Z=VhOn7YIR)#Gl)|0e-;=D;!7cAUNGhOE0Dt?&lvYEM) z?w{OU$s>C+Q#UI6J?C=qZ}JzceBQn`PHna|3-Y;Q(*?v*hu#v`obHg)4c(vH>2J+D ziEPrnVzojp#ejj}EvN>J70)RP8q@E<5=<2duO)&8IR`1f$@@ZIftd>IRA z8ura25wz=CbkRERYqu}&P1sWa_^8rrdmySSL*;EI(L50UoRLRMF7 zC4ow(eY%e`%8Z8=)HIzysX&f|%8m#w-KS*d1Y8h559AJaVC)^v~0iVT3rfY$v z8=YE-i#Q%y{JP{WyEj&;Ak_t5#mlSZ^}?dkz%<4lps@qzc_0EPPo4G82K10n3*`HA zi@GP-&q^T^GtPGDxxW+%d{cb>dOzq1^PR7Qzg551V%VozZl=C4kt84WKqq||P@KQ~ zVl;PZ-!#_;SNalQm`vHkxe5+>p;>xi$=U+hbA9g05@B-(yJd;q#l#MjPrd`*7TldP99>~_zGq9v}65Y5gkC-LJdppO^QAf=HTE7v-X z>;oc|Pah+=^#J6jy{dRh%oPWdX)Hg@1(04U&IwJnZ!< zxeMEgOBt61^)6XHFojpn*zOhpeQh3}T0=Ri=X*Jj@@&m2ulO|KN)ndpe?DZ?{@Y2D zH*u+ORmUE9BRcfa7>`JQb0vaVjAxv?e?U=*w4SW&PqoPwTd#1&2mu`nj0xAwRIfKr z>cQC!E`kl};(+&7=)#a>M4T>5?^gDIb(qOU%o7i8VJ?&48xW;#JrgOFAV^-B8$lij-v(6R1}`PT(<);z@z~A_0-T*pR}3$ z*a`#LL#U{luByMYvt>Oj*V3|qA`(M6+=H`S#0s-#2abNpx8Tgp%h8A&H486Y>8Glo&@LxZ6ZS^L+U&ZL5#=s*%DjV42g1>D*Z2J--&gST z;bESjnSJ}p&u|A35lm%aYt6H<2f80N+WL8Or`zSx7mfFl31hq=YK=z>^xYhQusWV! ztXfzkui9_{F*DR8D5(xu1q3ZlSZ+Khf4Q6bF{`nb4v;J zt9xtvU4t0GBtWkWJ~3{$rbBiPaHRr-t*@Bw%n6?Vd813pa zP_7xiI++V&4YAL|ANU{$0G#)XtXNpsSj!dAai;6WGQfL?NFNL! 
zwMr~@cSW~;PRkY0R~^X^JTdS&uH7=b7zE!xm>f zMTVy1c_56I4`@y1DlnC2*7+qlBXTpy7YSEV!O-y5ozzK`nrqy?V^^j)!L}DqR9+be zCNULQ&K+q%KUz(yUw^fp(XOZqip1HINOQ>0WLwU)m=_VvF-8cFU%?i_g94bhq&T*& z%Q<9@;0etfcs8^tl%IM((+yJ@jpv|a^ zNByr~s>EB;-VUrVoC<(Gj$(Ox32LG(EDS9BDg>0SeaKRI*B#Vz8og=GhWeW--dqEC zNicAI057jk4=xnXh}Qmw+1j_l+UtDg zhS<;9BoYmKwyg=Ur&tyH=lGykKQ-m(Mm>f7N@*W0IoBp}5A=$EvY$*rq5<4F$kFp; zYE;g)Cj;A}s_o5^e)XOq_4PI}vNs>5A$aC<78S0eaj5=;w6qZ@?tApLKKhc)4_z|> zN9e>$y~M$gCha^a6zJOsgK_yxT)h)FK~kI^ihOqN4t86}BU93*xBndPfMh*4C*9ql zqiY43|2A6S%U%Tx48{c_-9sCrAbE;bvGUjzCnVgVgS5i z2*%b%jHr%e$tMbi+YD)W)C4oCid?v}LiNj!2dt}gSjrL3v{ERJ<^fS6b~6WIK48cK zY-IBc{07v2=D06Qo-kd%^SNO>ZdL=7o8of~a+2OY5F7lu!3!COM`{(B$-9zk&BEQw zXfF&b1AuHy!8!N(7^0)+&pHNg8(O};HxJ+=tZrMJ?MqlYJ>Iv3U?&ki?M56%P0q$E z>r}U8Y#jIP>KuKtai!wW_;+=;!|mIPe>rw{>A^%^gf-+V@b_7M@v69O|5DPFh7aPg~iejUZE<#^BLAs?tb<`hCRlcJ0@3IZ;oqK z-i38~o(vg<>y%1Ht#|C~TR)Pb{ditat`@{vcLD+)8IE`oAA#x^X3_kXs#-8wT9W|7{QAS&m;#6TPDAmSEwFx~EO^5G4A ztxJ7PVSm2h{)qlJ!nUb)W{|DJZG)vkQo(_G`^x@zT;_q3A$^spTT$mc1Gw{_+c6JV zGZFF^LWV4KIX|n?4r0BKfbto(Oh}QjNY+b)Ea>blrnVIImwCijRAUK;Bre370915N zxvq=#_O%Hp#g~N6BhN<59tyus?WQB?^3}97O9xPa_E2~Y71|6rKNG18|24IM831B zM*pNlS}sG2L)+++PZ>RofD(h=M&NwS8X+@}a%EjB z88sn9=lzNfUg<}*-)E>hp8xF^H&qbIGzhv8;Pk2BqQ`+s`JVTSBCUD-VIw(m04~C_ z3Bw*ChVy}Wc9rLC)p3?bi1cHpO0s#X!V}0KLkJO9d6B*qV{zF>JJE_^{D|7z=-<@v zzwQz}**)Blr{8TnFUr*5Z0kdw`;Q?P|5m#HnS?29*W>?fN$k7|*=E5xf=tQ+ApH6L ziyUH=`=sV7z;UP(r%TMzbR#*fikpK%dSwEBTo~1fPl-B77FSgg4-QVS7Reb-};e|aQfFyCOCY>lrqhAj6xNn+WZyU)3+S>X`Y(h)ewCkkI# z5TUFHvGHmZ#OwjllkmGpy7t-*wLp*ldiO-3WRn!>FyDC!g;GHwlh{9NBl!x)!km zz|xNy)b1$XX?l9x@kdJMV15H~=6FsLb)ou3$C!SKPrwH)+Q#co7B>vMV&Q@Bowduiq9(Q$TkYzlQfu}CN`iC7008VzTY?KNoq zO)QxRonnCS#oJYRciqH*C`d)V^RVAI?pYqF2@Yp@)SxDsh4Y_car`a%ZV$EFwY;jI zA4&HOm~IAJHeS$e+TovKK@sxURydZe9WE_^tO)mKWes#<+EBxh8{AP`Y(PfbCC7d^ zg@HyFe~~usX6y>om`B@j2yUt&L>@=bTIG~M5DN<9!q~cj`tCHT$nM!2gjbH2@w+Wi z#Ltg6P?9;n`&3?DIc!_j+;M;uc8qDJ_&6ll5zQ60I9SM-QDQhWPDCkShfV1FVrSxb z(m7tL`ZAmbIIXjsr5YvRT42qO%p;&nSyo)Cp>u1&iG>bus6jPwlJsL$oiQk6zs(Nw zOr2I{{Wl-Te+aYR^2uBK`Sj~-SZ{l+p3HJLX()uBA`>eg%S zV=_SSK4@T9yiN?4rNZ(mD0X9@Z-EJmxn{D*ETpf5BF%XP`fuaXz}3b^FhC=GoZeYJZavf{@vqq_p&|r?E}gj z9M_M-@&(eOIyd7lqlf7}oS>_*l3`5fnI|A}NRLOd1jk3G$0TNYPcy(uMD#pO^ zZDXS;;*^XdZ~<)zE4^O8EYMkTDP?z1|F#?D zRb@~~xQ}R@R{HyD(GyTc-Ht*|AN>2D8G%}5i&yCVjsNT^_TO^o))x7}0Tfr-@v$HN z8+!P2vEzS#?k}tTzm@xT#D05O&isG9a~ncRN|1Z&^LQC9@G&toKU=JS`Sw2m*N+95 From 0387211b871d5ee8787adc2bd04dd4a2099d8429 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Thu, 29 Aug 2024 08:53:35 -0600 Subject: [PATCH 013/111] Copy edits (#8117) Signed-off-by: Melissa Vagi --- _dashboards/management/connect-prometheus.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/_dashboards/management/connect-prometheus.md b/_dashboards/management/connect-prometheus.md index 7f5196e2fa..ed5545bc56 100644 --- a/_dashboards/management/connect-prometheus.md +++ b/_dashboards/management/connect-prometheus.md @@ -34,7 +34,7 @@ Follow these steps to connect your data source: 3. Select **Review Configuration** > **Connect to Prometheus** to save your settings. The new connection will appear in the list of data sources. -### Modify a data source connection +## Modify a data source connection To modify a data source connection, follow these steps: @@ -44,10 +44,10 @@ To modify a data source connection, follow these steps: - To update the **Basic authentication** authentication method, select the **Update stored password** button. 
Within the pop-up window, enter the updated password and confirm it and select **Update stored password** to save the changes. To test the connection, select the **Test connection** button. - To update the **AWS Signature Version 4** authentication method, select the **Update stored AWS credential** button. Within the pop-up window, enter the updated access and secret keys and select **Update stored AWS credential** to save the changes. To test the connection, select the **Test connection** button. -### Delete a data source connection +## Delete a data source connection -Tondelete the data source connection, select the {::nomarkdown}delete icon{:/} icon. +To delete the data source connection, select the {::nomarkdown}delete icon{:/} icon. -## Creating an index pattern +## Create an index pattern After creating a data source connection, the next step is to create an index pattern for that data source. For more information and a tutorial on index patterns, refer to [Index patterns]({{site.url}}{{site.baseurl}}/dashboards/management/index-patterns/). From 830925000ea9e44b6358c06ea6dcaa7b4ea82f68 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Thu, 29 Aug 2024 12:07:23 -0600 Subject: [PATCH 014/111] Add metadata fields for mappings (content gap initiative) (#6933) * Add mapping as part of content gap initiative Signed-off-by: Melissa Vagi * Update mapping as part of content gap initiative Signed-off-by: Melissa Vagi * Add content to address feedback Signed-off-by: Melissa Vagi * Add delete mapping section Signed-off-by: Melissa Vagi * Update index.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Add metadata fields index page Signed-off-by: Melissa Vagi * Add setting descriptions Signed-off-by: Melissa Vagi * Delete remove mappings type section Signed-off-by: Melissa Vagi * Add individual metadata field docs Signed-off-by: Melissa Vagi * Added documentation for field names and ignored Signed-off-by: Melissa Vagi * Add id field doc Signed-off-by: Melissa Vagi * Add field docs Signed-off-by: Melissa Vagi * Add new docs Signed-off-by: Melissa Vagi * Add default and allowed values Signed-off-by: Melissa Vagi * Add default and allowed values Signed-off-by: Melissa Vagi * Update field-names.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update index-metadata.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update index-metadata.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update meta.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update meta.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update index-metadata.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update meta.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update index-metadata.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update routing.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update routing.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update source.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update index.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Signed-off-by: Melissa Vagi * Add dynamic templates section and examples Signed-off-by: Melissa Vagi * Add dynamic templates code snippet Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/source.md Co-authored-by: Ralph Ursprung <39383228+rursprung@users.noreply.github.com> Signed-off-by: 
Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Signed-off-by: Melissa Vagi * Update field-names.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Writing in progress Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Signed-off-by: Melissa Vagi * Update ignored.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update index.md address final tech review feedback Signed-off-by: Melissa Vagi * Update index.md Signed-off-by: Melissa Vagi * Update _field-types/mappings-use-cases.md Signed-off-by: Melissa Vagi * Update mappings-use-cases.md address tech review comments Signed-off-by: Melissa Vagi * Update field-names.md Signed-off-by: Melissa Vagi * Update id.md Signed-off-by: Melissa Vagi * Update ignored.md Signed-off-by: Melissa Vagi * Update index-metadata.md Signed-off-by: Melissa Vagi * Update index.md Signed-off-by: Melissa Vagi * Update meta.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/source.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update source.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Naarcha-AWS 
<97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update id.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update field-names.md Signed-off-by: Melissa Vagi * Address doc review comments Signed-off-by: Melissa Vagi * Address doc review comments Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/index.md Signed-off-by: Melissa Vagi * Update _field-types/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/mappings-use-cases.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/mappings-use-cases.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/mappings-use-cases.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/mappings-use-cases.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/mappings-use-cases.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Nathan 
Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/field-names.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/id.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/ignored.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index-metadata.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/meta.md Co-authored-by: Nathan Bower 
Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/source.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/source.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/source.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi * Update routing.md Signed-off-by: Melissa Vagi * Update _field-types/metadata-fields/routing.md Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi Co-authored-by: Ralph Ursprung <39383228+rursprung@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _field-types/index.md | 182 ++++++++---------- _field-types/mappings-use-cases.md | 122 ++++++++++++ _field-types/metadata-fields/field-names.md | 43 +++++ _field-types/metadata-fields/id.md | 86 +++++++++ _field-types/metadata-fields/ignored.md | 147 ++++++++++++++ .../metadata-fields/index-metadata.md | 86 +++++++++ _field-types/metadata-fields/index.md | 21 ++ _field-types/metadata-fields/meta.md | 87 +++++++++ _field-types/metadata-fields/routing.md | 92 +++++++++ _field-types/metadata-fields/source.md | 54 ++++++ 10 files changed, 823 insertions(+), 97 deletions(-) create mode 100644 _field-types/mappings-use-cases.md create mode 100644 _field-types/metadata-fields/field-names.md create mode 100644 _field-types/metadata-fields/id.md create mode 100644 _field-types/metadata-fields/ignored.md create mode 100644 _field-types/metadata-fields/index-metadata.md create mode 100644 _field-types/metadata-fields/index.md create mode 100644 _field-types/metadata-fields/meta.md create mode 100644 _field-types/metadata-fields/routing.md create mode 100644 _field-types/metadata-fields/source.md diff --git a/_field-types/index.md b/_field-types/index.md index 7a7e816ada..e9250f409d 100644 --- a/_field-types/index.md +++ b/_field-types/index.md @@ -12,43 +12,77 @@ redirect_from: # Mappings and field types -You can define how documents and their fields are stored and indexed by creating a _mapping_. 
The mapping specifies the list of fields for a document. Every field in the document has a _field type_, which defines the type of data the field contains. For example, you may want to specify that the `year` field should be of type `date`. To learn more, see [Supported field types]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/index/). +Mappings tell OpenSearch how to store and index your documents and their fields. You can specify the data type for each field (for example, `year` as `date`) to make storage and querying more efficient. -If you're just starting to build out your cluster and data, you may not know exactly how your data should be stored. In those cases, you can use dynamic mappings, which tell OpenSearch to dynamically add data and its fields. However, if you know exactly what types your data falls under and want to enforce that standard, then you can use explicit mappings. +While [dynamic mappings](#dynamic-mapping) automatically add new data and fields, using explicit mappings is recommended. Explicit mappings let you define the exact structure and data types upfront. This helps to maintain data consistency and optimize performance, especially for large datasets or high-volume indexing operations. -For example, if you want to indicate that `year` should be of type `text` instead of an `integer`, and `age` should be an `integer`, you can do so with explicit mappings. By using dynamic mapping, OpenSearch might interpret both `year` and `age` as integers. +For example, with explicit mappings, you can ensure that `year` is treated as text and `age` as an integer instead of both being interpreted as integers by dynamic mapping. -This section provides an example for how to create an index mapping and how to add a document to it that will get ip_range validated. - -#### Table of contents -1. TOC -{:toc} - - ---- ## Dynamic mapping When you index a document, OpenSearch adds fields automatically with dynamic mapping. You can also explicitly add fields to an index mapping. -#### Dynamic mapping types +### Dynamic mapping types Type | Description :--- | :--- -null | A `null` field can't be indexed or searched. When a field is set to null, OpenSearch behaves as if that field has no values. -boolean | OpenSearch accepts `true` and `false` as boolean values. An empty string is equal to `false.` -float | A single-precision 32-bit floating point number. -double | A double-precision 64-bit floating point number. -integer | A signed 32-bit number. -object | Objects are standard JSON objects, which can have fields and mappings of their own. For example, a `movies` object can have additional properties such as `title`, `year`, and `director`. -array | Arrays in OpenSearch can only store values of one type, such as an array of just integers or strings. Empty arrays are treated as though they are fields with no values. -text | A string sequence of characters that represent full-text values. -keyword | A string sequence of structured characters, such as an email address or ZIP code. +`null` | A `null` field can't be indexed or searched. When a field is set to null, OpenSearch behaves as if the field has no value. +`boolean` | OpenSearch accepts `true` and `false` as Boolean values. An empty string is equal to `false.` +`float` | A single-precision, 32-bit floating-point number. +`double` | A double-precision, 64-bit floating-point number. +`integer` | A signed 32-bit number. +`object` | Objects are standard JSON objects, which can have fields and mappings of their own. 
For example, a `movies` object can have additional properties such as `title`, `year`, and `director`. +`array` | OpenSearch does not have a specific array data type. Arrays are represented as a set of values of the same data type (for example, integers or strings) associated with a field. When indexing, you can pass multiple values for a field, and OpenSearch will treat it as an array. Empty arrays are valid and recognized as array fields with zero elements---not as fields with no values. OpenSearch supports querying and filtering arrays, including checking for values, range queries, and array operations like concatenation and intersection. Nested arrays, which may contain complex objects or other arrays, can also be used for advanced data modeling. +`text` | A string sequence of characters that represent full-text values. +`keyword` | A string sequence of structured characters, such as an email address or ZIP code. date detection string | Enabled by default, if new string fields match a date's format, then the string is processed as a `date` field. For example, `date: "2012/03/11"` is processed as a date. numeric detection string | If disabled, OpenSearch may automatically process numeric values as strings when they should be processed as numbers. When enabled, OpenSearch can process strings into `long`, `integer`, `short`, `byte`, `double`, `float`, `half_float`, `scaled_float`, and `unsigned_long`. Default is disabled. +### Dynamic templates + +Dynamic templates are used to define custom mappings for dynamically added fields based on the data type, field name, or field path. They allow you to define a flexible schema for your data that can automatically adapt to changes in the structure or format of the input data. + +You can use the following syntax to define a dynamic mapping template: + +```json +PUT index +{ + "mappings": { + "dynamic_templates": [ + { + "fields": { + "mapping": { + "type": "short" + }, + "match_mapping_type": "string", + "path_match": "status*" + } + } + ] + } +} +``` +{% include copy-curl.html %} + +This mapping configuration dynamically maps any field with a name starting with `status` (for example, `status_code`) to the `short` data type if the initial value provided during indexing is a string. + +### Dynamic mapping parameters + +The `dynamic_templates` support the following parameters for matching conditions and mapping rules. The default value is `null`. + +Parameter | Description | +----------|-------------| +`match_mapping_type` | Specifies the JSON data type (for example, string, long, double, object, binary, Boolean, date) that triggers the mapping. +`match` | A regular expression used to match field names and apply the mapping. +`unmatch` | A regular expression used to exclude field names from the mapping. +`match_pattern` | Determines the pattern matching behavior, either `regex` or `simple`. Default is `simple`. +`path_match` | Allows you to match nested field paths using a regular expression. +`path_unmatch` | Excludes nested field paths from the mapping using a regular expression. +`mapping` | The mapping configuration to apply. + ## Explicit mapping -If you know exactly what your field data types need to be, you can specify them in your request body when creating your index. 
+If you know exactly which field data types you need to use, then you can specify them in your request body when creating your index, as shown in the following example request: ```json PUT sample-index1 @@ -62,8 +96,9 @@ PUT sample-index1 } } ``` +{% include copy-curl.html %} -### Response +#### Response ```json { "acknowledged": true, @@ -71,8 +106,9 @@ PUT sample-index1 "index": "sample-index1" } ``` +{% include copy-curl.html %} -To add mappings to an existing index or data stream, you can send a request to the `_mapping` endpoint using the `PUT` or `POST` HTTP method: +To add mappings to an existing index or data stream, you can send a request to the `_mapping` endpoint using the `PUT` or `POST` HTTP method, as shown in the following example request: ```json POST sample-index1/_mapping @@ -84,84 +120,29 @@ POST sample-index1/_mapping } } ``` +{% include copy-curl.html %} You cannot change the mapping of an existing field, you can only modify the field's mapping parameters. {: .note} ---- -## Mapping example usage +## Mapping parameters -The following example shows how to create a mapping to specify that OpenSearch should ignore any documents with malformed IP addresses that do not conform to the [`ip`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/ip/) data type. You accomplish this by setting the `ignore_malformed` parameter to `true`. +Mapping parameters are used to configure the behavior of index fields. See [Mappings and field types]({{site.url}}{{site.baseurl}}/field-types/) for more information. -### Create an index with an `ip` mapping +## Mapping limit settings -To create an index, use a PUT request: +OpenSearch has certain mapping limits and settings, such as the settings listed in the following table. Settings can be configured based on your requirements. -```json -PUT /test-index -{ - "mappings" : { - "properties" : { - "ip_address" : { - "type" : "ip", - "ignore_malformed": true - } - } - } -} -``` - -You can add a document that has a malformed IP address to your index: - -```json -PUT /test-index/_doc/1 -{ - "ip_address" : "malformed ip address" -} -``` - -This indexed IP address does not throw an error because `ignore_malformed` is set to true. - -You can query the index using the following request: - -```json -GET /test-index/_search -``` +| Setting | Default value | Allowed value | Type | Description | +|-|-|-|-|-| +| `index.mapping.nested_fields.limit` | 50 | [0,) | Dynamic | Limits the maximum number of nested fields that can be defined in an index mapping. | +| `index.mapping.nested_objects.limit` | 10,000 | [0,) | Dynamic | Limits the maximum number of nested objects that can be created in a single document. | +| `index.mapping.total_fields.limit` | 1,000 | [0,) | Dynamic | Limits the maximum number of fields that can be defined in an index mapping. | +| `index.mapping.depth.limit` | 20 | [1,100] | Dynamic | Limits the maximum depth of nested objects and nested fields that can be defined in an index mapping. | +| `index.mapping.field_name_length.limit` | 50,000 | [1,50000] | Dynamic | Limits the maximum length of field names that can be defined in an index mapping. | +| `index.mapper.dynamic` | true | {true,false} | Dynamic | Determines whether new fields should be dynamically added to a mapping. 
| -The response shows that the `ip_address` field is ignored in the indexed document: - -```json -{ - "took": 14, - "timed_out": false, - "_shards": { - "total": 1, - "successful": 1, - "skipped": 0, - "failed": 0 - }, - "hits": { - "total": { - "value": 1, - "relation": "eq" - }, - "max_score": 1, - "hits": [ - { - "_index": "test-index", - "_id": "1", - "_score": 1, - "_ignored": [ - "ip_address" - ], - "_source": { - "ip_address": "malformed ip address" - } - } - ] - } -} -``` +--- ## Get a mapping @@ -170,14 +151,16 @@ To get all mappings for one or more indexes, use the following request: ```json GET /_mapping ``` +{% include copy-curl.html %} -In the above request, `` may be an index name or a comma-separated list of index names. +In the previous request, `` may be an index name or a comma-separated list of index names. To get all mappings for all indexes, use the following request: ```json GET _mapping ``` +{% include copy-curl.html %} To get a mapping for a specific field, provide the index name and the field name: @@ -185,14 +168,14 @@ To get a mapping for a specific field, provide the index name and the field name GET _mapping/field/ GET //_mapping/field/ ``` +{% include copy-curl.html %} -Both `` and `` can be specified as one value or a comma-separated list. - -For example, the following request retrieves the mapping for the `year` and `age` fields in `sample-index1`: +Both `` and `` can be specified as either one value or a comma-separated list. For example, the following request retrieves the mapping for the `year` and `age` fields in `sample-index1`: ```json GET sample-index1/_mapping/field/year,age ``` +{% include copy-curl.html %} The response contains the specified fields: @@ -220,3 +203,8 @@ The response contains the specified fields: } } ``` +{% include copy-curl.html %} + +## Mappings use cases + +See [Mappings use cases]({{site.url}}{{site.baseurl}}/field-types/mappings-use-cases/) for use case examples, including examples of mapping string fields and ignoring malformed IP addresses. diff --git a/_field-types/mappings-use-cases.md b/_field-types/mappings-use-cases.md new file mode 100644 index 0000000000..835e030bab --- /dev/null +++ b/_field-types/mappings-use-cases.md @@ -0,0 +1,122 @@ +--- +layout: default +title: Mappings use cases +parent: Mappings and fields types +nav_order: 5 +nav_exclude: true +--- + +# Mappings use cases + +Mappings provide control over how data is indexed and queried, enabling optimized performance and efficient storage for a range of use cases. + +--- + +## Example: Ignoring malformed IP addresses + +The following example shows you how to create a mapping specifying that OpenSearch should ignore any documents containing malformed IP addresses that do not conform to the [`ip`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/ip/) data type. You can accomplish this by setting the `ignore_malformed` parameter to `true`. + +### Create an index with an `ip` mapping + +To create an index with an `ip` mapping, use a PUT request: + +```json +PUT /test-index +{ + "mappings" : { + "properties" : { + "ip_address" : { + "type" : "ip", + "ignore_malformed": true + } + } + } +} +``` +{% include copy-curl.html %} + +Then add a document with a malformed IP address: + +```json +PUT /test-index/_doc/1 +{ + "ip_address" : "malformed ip address" +} +``` +{% include copy-curl.html %} + +When you query the index, the `ip_address` field will be ignored. 
You can query the index using the following request: + +```json +GET /test-index/_search +``` +{% include copy-curl.html %} + +#### Response + +```json +{ + "took": 14, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "test-index", + "_id": "1", + "_score": 1, + "_ignored": [ + "ip_address" + ], + "_source": { + "ip_address": "malformed ip address" + } + } + ] + } +} +``` +{% include copy-curl.html %} + +--- + +## Mapping string fields to `text` and `keyword` types + +To create an index named `movies1` with a dynamic template that maps all string fields to both the `text` and `keyword` types, you can use the following request: + +```json +PUT movies1 +{ + "mappings": { + "dynamic_templates": [ + { + "strings": { + "match_mapping_type": "string", + "mapping": { + "type": "text", + "fields": { + "keyword": { + "type": "keyword", + "ignore_above": 256 + } + } + } + } + } + ] + } +} +``` +{% include copy-curl.html %} + +This dynamic template ensures that any string fields in your documents will be indexed as both a full-text `text` type and a `keyword` type. diff --git a/_field-types/metadata-fields/field-names.md b/_field-types/metadata-fields/field-names.md new file mode 100644 index 0000000000..b17e94fbb4 --- /dev/null +++ b/_field-types/metadata-fields/field-names.md @@ -0,0 +1,43 @@ +--- +layout: default +title: Field names +nav_order: 10 +parent: Metadata fields +--- + +# Field names + +The `_field_names` field indexes field names that contain non-null values. This enables the use of the `exists` query, which can identify documents that either have or do not have non-null values for a specified field. + +However, `_field_names` only indexes field names when both `doc_values` and `norms` are disabled. If either `doc_values` or `norms` are enabled, then the `exists` query still functions but will not rely on the `_field_names` field. + +## Mapping example + +```json +{ + "mappings": { + "_field_names": { + "enabled": "true" + }, + "properties": { + }, + "title": { + "type": "text", + "doc_values": false, + "norms": false + }, + "description": { + "type": "text", + "doc_values": true, + "norms": false + }, + "price": { + "type": "float", + "doc_values": false, + "norms": true + } + } + } +} +``` +{% include copy-curl.html %} diff --git a/_field-types/metadata-fields/id.md b/_field-types/metadata-fields/id.md new file mode 100644 index 0000000000..f66f4b8e13 --- /dev/null +++ b/_field-types/metadata-fields/id.md @@ -0,0 +1,86 @@ +--- +layout: default +title: ID +nav_order: 20 +parent: Metadata fields +--- + +# ID + +Each document in OpenSearch has a unique `_id` field. This field is indexed, allowing you to retrieve documents using the GET API or the [`ids` query]({{site.url}}{{site.baseurl}}/query-dsl/term/ids/). + +If you do not provide an `_id` value, then OpenSearch automatically generates one for the document. 
+{: .note} + +The following example request creates an index named `test-index1` and adds two documents with different `_id` values: + +```json +PUT test-index1/_doc/1 +{ + "text": "Document with ID 1" +} + +PUT test-index1/_doc/2?refresh=true +{ + "text": "Document with ID 2" +} +``` +{% include copy-curl.html %} + +You can then query the documents using the `_id` field, as shown in the following example request: + +```json +GET test-index1/_search +{ + "query": { + "terms": { + "_id": ["1", "2"] + } + } +} +``` +{% include copy-curl.html %} + +The response returns both documents with `_id` values of `1` and `2`: + +```json +{ + "took": 10, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "test-index1", + "_id": "1", + "_score": 1, + "_source": { + "text": "Document with ID 1" + } + }, + { + "_index": "test-index1", + "_id": "2", + "_score": 1, + "_source": { + "text": "Document with ID 2" + } + } + ] + } +``` +{% include copy-curl.html %} + +## Limitations of the `_id` field + +While the `_id` field can be used in various queries, it is restricted from use in aggregations, sorting, and scripting. If you need to sort or aggregate on the `_id` field, it is recommended to duplicate the `_id` content into another field with `doc_values` enabled. Refer to [IDs query]({{site.url}}{{site.baseurl}}/query-dsl/term/ids/) for an example. diff --git a/_field-types/metadata-fields/ignored.md b/_field-types/metadata-fields/ignored.md new file mode 100644 index 0000000000..e867cfc754 --- /dev/null +++ b/_field-types/metadata-fields/ignored.md @@ -0,0 +1,147 @@ +--- +layout: default +title: Ignored +nav_order: 25 +parent: Metadata fields +--- + +# Ignored + +The `_ignored` field helps you manage issues related to malformed data in your documents. This field is used to index and store field names that were ignored during the indexing process as a result of the `ignore_malformed` setting being enabled in the [index mapping]({{site.url}}{{site.baseurl}}/field-types/). + +The `_ignored` field allows you to search for and identify documents containing fields that were ignored as well as for the specific field names that were ignored. This can be useful for troubleshooting. + +You can query the `_ignored` field using the `term`, `terms`, and `exists` queries, and the results will be included in the search hits. + +The `_ignored` field is only populated when the `ignore_malformed` setting is enabled in your index mapping. If `ignore_malformed` is set to `false` (the default value), then malformed fields will cause the entire document to be rejected, and the `_ignored` field will not be populated. 
+{: .note} + +The following example request shows you how to use the `_ignored` field: + +```json +GET _search +{ + "query": { + "exists": { + "field": "_ignored" + } + } +} +``` +{% include copy-curl.html %} + +--- + +#### Example indexing request with the `_ignored` field + +The following example request adds a new document to the `test-ignored` index with `ignore_malformed` set to `true` so that no error is thrown during indexing: + +```json +PUT test-ignored +{ + "mappings": { + "properties": { + "title": { + "type": "text" + }, + "length": { + "type": "long", + "ignore_malformed": true + } + } + } +} + +POST test-ignored/_doc +{ + "title": "correct text", + "length": "not a number" +} + +GET test-ignored/_search +{ + "query": { + "exists": { + "field": "_ignored" + } + } +} +``` +{% include copy-curl.html %} + +#### Example reponse + +```json +{ + "took": 42, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "test-ignored", + "_id": "qcf0wZABpEYH7Rw9OT7F", + "_score": 1, + "_ignored": [ + "length" + ], + "_source": { + "title": "correct text", + "length": "not a number" + } + } + ] + } +} +``` + +--- + +## Ignoring a specified field + +You can use a `term` query to find documents in which a specific field was ignored, as shown in the following example request: + +```json +GET _search +{ + "query": { + "term": { + "_ignored": "created_at" + } + } +} +``` +{% include copy-curl.html %} + +#### Reponse + +```json +{ + "took": 51, + "timed_out": false, + "_shards": { + "total": 45, + "successful": 45, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 0, + "relation": "eq" + }, + "max_score": null, + "hits": [] + } +} +``` diff --git a/_field-types/metadata-fields/index-metadata.md b/_field-types/metadata-fields/index-metadata.md new file mode 100644 index 0000000000..657f7d62a5 --- /dev/null +++ b/_field-types/metadata-fields/index-metadata.md @@ -0,0 +1,86 @@ +--- +layout: default +title: Index +nav_order: 25 +parent: Metadata fields +--- + +# Index + +When querying across multiple indexes, you may need to filter results based on the index into which a document was indexed. The `index` field matches documents based on their index. + +The following example request creates two indexes, `products` and `customers`, and adds a document to each index: + +```json +PUT products/_doc/1 +{ + "name": "Widget X" +} + +PUT customers/_doc/2 +{ + "name": "John Doe" +} +``` +{% include copy-curl.html %} + +You can then query both indexes and filter the results using the `_index` field, as shown in the following example request: + +```json +GET products,customers/_search +{ + "query": { + "terms": { + "_index": ["products", "customers"] + } + }, + "aggs": { + "index_groups": { + "terms": { + "field": "_index", + "size": 10 + } + } + }, + "sort": [ + { + "_index": { + "order": "desc" + } + } + ], + "script_fields": { + "index_name": { + "script": { + "lang": "painless", + "source": "doc['_index'].value" + } + } + } +} +``` +{% include copy-curl.html %} + +In this example: + +- The `query` section uses a `terms` query to match documents from the `products` and `customers` indexes. +- The `aggs` section performs a `terms` aggregation on the `_index` field, grouping the results by index. +- The `sort` section sorts the results by the `_index` field in ascending order. 
+- The `script_fields` section adds a new field called `index_name` to the search results containing the `_index` field value for each document. + +## Querying on the `_index` field + +The `_index` field represents the index into which a document was indexed. You can use this field in your queries to filter, aggregate, sort, or retrieve index information for your search results. + +Because the `_index` field is automatically added to every document, you can use it in your queries like any other field. For example, you can use the `terms` query to match documents from multiple indexes. The following example query returns all documents from the `products` and `customers` indexes: + +```json + { + "query": { + "terms": { + "_index": ["products", "customers"] + } + } +} +``` +{% include copy-curl.html %} diff --git a/_field-types/metadata-fields/index.md b/_field-types/metadata-fields/index.md new file mode 100644 index 0000000000..cdc079e1e5 --- /dev/null +++ b/_field-types/metadata-fields/index.md @@ -0,0 +1,21 @@ +--- +layout: default +title: Metadata fields +nav_order: 90 +has_children: true +has_toc: false +--- + +# Metadata fields + +OpenSearch provides built-in metadata fields that allow you to access information about the documents in an index. These fields can be used in your queries as needed. + +Metadata field | Description +:--- | :--- +`_field_names` | The document fields with non-empty or non-null values. +`_ignored` | The document fields that were ignored during the indexing process due to the presence of malformed data, as specified by the `ignore_malformed` setting. +`_id` | The unique identifier assigned to each document. +`_index` | The index in which the document is stored. +`_meta` | Stores custom metadata or additional information specific to the application or use case. +`_routing` | Allows you to specify a custom value that determines the shard assignment for a document in an OpenSearch cluster. +`_source` | Contains the original JSON representation of the document data. diff --git a/_field-types/metadata-fields/meta.md b/_field-types/metadata-fields/meta.md new file mode 100644 index 0000000000..220d58f106 --- /dev/null +++ b/_field-types/metadata-fields/meta.md @@ -0,0 +1,87 @@ +--- +layout: default +title: Meta +nav_order: 30 +parent: Metadata fields +--- + +# Meta + +The `_meta` field is a mapping property that allows you to attach custom metadata to your index mappings. This metadata can be used by your application to store information relevant to your use case, such as versioning, ownership, categorization, or auditing. + +## Usage + +You can define the `_meta` field when creating a new index or updating an existing index's mapping, as shown in the following example request: + +```json +PUT my-index +{ + "mappings": { + "_meta": { + "application": "MyApp", + "version": "1.2.3", + "author": "John Doe" + }, + "properties": { + "title": { + "type": "text" + }, + "description": { + "type": "text" + } + } + } +} + +``` +{% include copy-curl.html %} + +In this example, three custom metadata fields are added: `application`, `version`, and `author`. These fields can be used by your application to store any relevant information about the index, such as the application it belongs to, the application version, or the author of the index. 
+ +You can update the `_meta` field using the [Put Mapping API]({{site.url}}{{site.baseurl}}/api-reference/index-apis/put-mapping/) operation, as shown in the following example request: + +```json +PUT my-index/_mapping +{ + "_meta": { + "application": "MyApp", + "version": "1.3.0", + "author": "Jane Smith" + } +} +``` +{% include copy-curl.html %} + +## Retrieving `meta` information + +You can retrieve the `_meta` information for an index using the [Get Mapping API]({{site.url}}{{site.baseurl}}/field-types/#get-a-mapping) operation, as shown in the following example request: + +```json +GET my-index/_mapping +``` +{% include copy-curl.html %} + +The response returns the full index mapping, including the `_meta` field: + +```json +{ + "my-index": { + "mappings": { + "_meta": { + "application": "MyApp", + "version": "1.3.0", + "author": "Jane Smith" + }, + "properties": { + "description": { + "type": "text" + }, + "title": { + "type": "text" + } + } + } + } +} +``` +{% include copy-curl.html %} diff --git a/_field-types/metadata-fields/routing.md b/_field-types/metadata-fields/routing.md new file mode 100644 index 0000000000..9064e20c49 --- /dev/null +++ b/_field-types/metadata-fields/routing.md @@ -0,0 +1,92 @@ +--- +layout: default +title: Routing +nav_order: 35 +parent: Metadata fields +--- + +# Routing + +OpenSearch uses a hashing algorithm to route documents to specific shards in an index. By default, the document's `_id` field is used as the routing value, but you can also specify a custom routing value for each document. + +## Default routing + +The following is the default OpenSearch routing formula. The `_routing` value is the document's `_id`. + +```json +shard_num = hash(_routing) % num_primary_shards +``` + +## Custom routing + +You can specify a custom routing value when indexing a document, as shown in the following example request: + +```json +PUT sample-index1/_doc/1?routing=JohnDoe1 +{ + "title": "This is a document" +} +``` +{% include copy-curl.html %} + +In this example, the document is routed using the value `JohnDoe1` instead of the default `_id`. + +You must provide the same routing value when retrieving, deleting, or updating the document, as shown in the following example request: + +```json +GET sample-index1/_doc/1?routing=JohnDoe1 +``` +{% include copy-curl.html %} + +## Querying by routing + +You can query documents based on their routing value by using the `_routing` field, as shown in the following example. This query only searches the shard(s) associated with the `JohnDoe1` routing value: + +```json +GET sample-index1/_search +{ + "query": { + "terms": { + "_routing": [ "JohnDoe1" ] + } + } +} +``` +{% include copy-curl.html %} + +## Required routing + +You can make custom routing required for all CRUD operations on an index, as shown in the following example request. If you try to index a document without providing a routing value, OpenSearch will throw an exception. + +```json +PUT sample-index2 +{ + "mappings": { + "_routing": { + "required": true + } + } +} +``` +{% include copy-curl.html %} + +## Routing to specific shards + +You can configure an index to route custom values to a subset of shards rather than a single shard. This is done by setting `index.routing_partition_size` at the time of index creation. The formula for calculating the shard is `shard_num = (hash(_routing) + hash(_id)) % routing_partition_size) % num_primary_shards`. 
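+
+The intent of the calculation can be read in two steps: the routing value selects a coarse position, and the document ID adds a small offset that is bounded by the partition size. As a rough illustration only, assuming an index with 8 primary shards and `index.routing_partition_size` set to 4:
+
+```
+# Illustration only; hash() stands in for the internal routing hash.
+offset    = hash(_id) % 4                      # one of 4 positions within the partition
+shard_num = (hash(_routing) + offset) % 8      # final shard for the document
+```
+
+All documents that share a routing value are therefore confined to a partition of at most 4 of the 8 shards, while their IDs spread them across that subset.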
+ +The following example request routes documents to one of four shards in the index: + +```json +PUT sample-index3 +{ + "settings": { + "index.routing_partition_size": 4 + }, + "mappings": { + "_routing": { + "required": true + } + } +} +``` +{% include copy-curl.html %} diff --git a/_field-types/metadata-fields/source.md b/_field-types/metadata-fields/source.md new file mode 100644 index 0000000000..c9e714f43c --- /dev/null +++ b/_field-types/metadata-fields/source.md @@ -0,0 +1,54 @@ +--- +layout: default +title: Source +nav_order: 40 +parent: Metadata fields +--- + +# Source + +The `_source` field contains the original JSON document body that was indexed. While this field is not searchable, it is stored so that the full document can be returned when executing fetch requests, such as `get` and `search`. + +## Disabling the field + +You can disable the `_source` field by setting the `enabled` parameter to `false`, as shown in the following example request: + +```json +PUT sample-index1 +{ + "mappings": { + "_source": { + "enabled": false + } + } +} +``` +{% include copy-curl.html %} + +Disabling the `_source` field can impact the availability of certain features, such as the `update`, `update_by_query`, and `reindex` APIs, as well as the ability to debug queries or aggregations using the original indexed document. +{: .warning} + +## Including or excluding fields + +You can selectively control the contents of the `_source` field by using the `includes` and `excludes` parameters. This allows you to prune the stored `_source` field after it is indexed but before it is saved, as shown in the following example request: + +```json +PUT logs +{ + "mappings": { + "_source": { + "includes": [ + "*.count", + "meta.*" + ], + "excludes": [ + "meta.description", + "meta.other.*" + ] + } + } +} +``` +{% include copy-curl.html %} + +These fields are not stored in the `_source`, but you can still search them because the data remains indexed. From 2b2b97134035a02345d3291e33a49ed6f8babb4e Mon Sep 17 00:00:00 2001 From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Date: Thu, 29 Aug 2024 20:59:22 +0100 Subject: [PATCH 015/111] Restructuring of the file 'Modifying the YAML files' (#8126) * moved the sections of 'yaml.md' to reflect how they show up in the file system for ease of reading and use Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com * Update _security/configuration/yaml.md Signed-off-by: Melissa Vagi --------- Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: Melissa Vagi Co-authored-by: Melissa Vagi --- _security/configuration/yaml.md | 404 ++++++++++++++++---------------- 1 file changed, 200 insertions(+), 204 deletions(-) diff --git a/_security/configuration/yaml.md b/_security/configuration/yaml.md index 3aabce53d5..4bcb8b0460 100644 --- a/_security/configuration/yaml.md +++ b/_security/configuration/yaml.md @@ -15,6 +15,80 @@ Before running [`securityadmin.sh`]({{site.url}}{{site.baseurl}}/security/config The approach we recommend for using the YAML files is to first configure [reserved and hidden resources]({{site.url}}{{site.baseurl}}/security/access-control/api#reserved-and-hidden-resources), such as the `admin` and `kibanaserver` users. Thereafter you can create other users, roles, mappings, action groups, and tenants using OpenSearch Dashboards or the REST API. +## action_groups.yml + +This file contains any initial action groups that you want to add to the Security plugin. 
+ +Aside from some metadata, the default file is empty, because the Security plugin has a number of static action groups that it adds automatically. These static action groups cover a wide variety of use cases and are a great way to get started with the plugin. + +```yml +--- +my-action-group: + reserved: false + hidden: false + allowed_actions: + - "indices:data/write/index*" + - "indices:data/write/update*" + - "indices:admin/mapping/put" + - "indices:data/write/bulk*" + - "read" + - "write" + static: false +_meta: + type: "actiongroups" + config_version: 2 +``` + +## allowlist.yml + +You can use `allowlist.yml` to add any endpoints and HTTP requests to a list of allowed endpoints and requests. If enabled, all users except the super admin are allowed access to only the specified endpoints and HTTP requests, and all other HTTP requests associated with the endpoint are denied. For example, if GET `_cluster/settings` is added to the allow list, users cannot submit PUT requests to `_cluster/settings` to update cluster settings. + +Note that while you can configure access to endpoints this way, for most cases, it is still best to configure permissions using the Security plugin's users and roles, which have more granular settings. + +```yml +--- +_meta: + type: "allowlist" + config_version: 2 + +# Description: +# enabled - feature flag. +# if enabled is false, all endpoints are accessible. +# if enabled is true, all users except the SuperAdmin can only submit the allowed requests to the specified endpoints. +# SuperAdmin can access all APIs. +# SuperAdmin is defined by the SuperAdmin certificate, which is configured with the opensearch.yml setting plugins.security.authcz.admin_dn: +# Refer to the example setting in opensearch.yml to learn more about configuring SuperAdmin. +# +# requests - map of allow listed endpoints and HTTP requests + +#this name must be config +config: + enabled: true + requests: + /_cluster/settings: + - GET + /_cat/nodes: + - GET +``` + +To enable PUT requests to cluster settings, add PUT to the list of allowed operations under `/_cluster/settings`. + +```yml +requests: + /_cluster/settings: + - GET + - PUT +``` + +You can also add custom indexes to the allow list. `allowlist.yml` doesn't support wildcards, so you must manually specify all of the indexes you want to add. + +```yml +requests: # Only allow GET requests to /sample-index1/_doc/1 and /sample-index2/_doc/1 + /sample-index1/_doc/1: + - GET + /sample-index2/_doc/1: + - GET +``` ## internal_users.yml @@ -92,196 +166,24 @@ snapshotrestore: description: "Demo snapshotrestore user" ``` -## opensearch.yml - -In addition to many OpenSearch settings, this file contains paths to TLS certificates and their attributes, such as distinguished names and trusted certificate authorities. 
- -```yml -plugins.security.ssl.transport.pemcert_filepath: esnode.pem -plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem -plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem -plugins.security.ssl.transport.enforce_hostname_verification: false -plugins.security.ssl.http.enabled: true -plugins.security.ssl.http.pemcert_filepath: esnode.pem -plugins.security.ssl.http.pemkey_filepath: esnode-key.pem -plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem -plugins.security.allow_unsafe_democertificates: true -plugins.security.allow_default_init_securityindex: true -plugins.security.authcz.admin_dn: - - CN=kirk,OU=client,O=client,L=test, C=de - -plugins.security.audit.type: internal_opensearch -plugins.security.enable_snapshot_restore_privilege: true -plugins.security.check_snapshot_restore_write_privileges: true -plugins.security.cache.ttl_minutes: 60 -plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"] -plugins.security.system_indices.enabled: true -plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"] -node.max_local_storage_nodes: 3 -``` - -For a full list of `opensearch.yml` Security plugin settings, see [Security settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/). -{: .note} - -### Refining your configuration - -The `plugins.security.allow_default_init_securityindex` setting, when set to `true`, sets the Security plugin to its default security settings if an attempt to create the security index fails when OpenSearch launches. Default security settings are stored in YAML files contained in the `opensearch-project/security/config` directory. By default, this setting is `false`. - -```yml -plugins.security.allow_default_init_securityindex: true -``` - -An authentication cache for the Security plugin exists to help speed up authentication by temporarily storing user objects returned from the backend so that the Security plugin is not required to make repeated requests for them. To determine how long it takes for caching to time out, you can use the `plugins.security.cache.ttl_minutes` property to set a value in minutes. The default is `60`. You can disable caching by setting the value to `0`. - -```yml -plugins.security.cache.ttl_minutes: 60 -``` - -### Enabling user access to system indexes - -Mapping a system index permission to a user allows that user to modify the system index specified in the permission's name (the one exception is the Security plugin's [system index]({{site.url}}{{site.baseurl}}/security/configuration/system-indices/)). The `plugins.security.system_indices.permission.enabled` setting provides a way for administrators to make this permission available for or hidden from role mapping. - -When set to `true`, the feature is enabled and users with permission to modify roles can create roles that include permissions that grant access to system indexes: - -```yml -plugins.security.system_indices.permission.enabled: true -``` - -When set to `false`, the permission is disabled and only admins with an admin certificate can make changes to system indexes. By default, the permission is set to `false` in a new cluster. 
- -To learn more about system index permissions, see [System index permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/#system-index-permissions). - - -### Password settings - -If you want to run your users' passwords against some validation, specify a regular expression (regex) in this file. You can also include an error message that loads when passwords don't pass validation. The following example demonstrates how to include a regex so OpenSearch requires new passwords to be a minimum of eight characters with at least one uppercase, one lowercase, one digit, and one special character. - -Note that OpenSearch validates only users and passwords created through OpenSearch Dashboards or the REST API. - -```yml -plugins.security.restapi.password_validation_regex: '(?=.*[A-Z])(?=.*[^a-zA-Z\d])(?=.*[0-9])(?=.*[a-z]).{8,}' -plugins.security.restapi.password_validation_error_message: "Password must be minimum 8 characters long and must contain at least one uppercase letter, one lowercase letter, one digit, and one special character." -``` - -In addition, a score-based password strength estimator allows you to set a threshold for password strength when creating a new internal user or updating a user's password. This feature makes use of the [zxcvbn library](https://github.com/dropbox/zxcvbn) to apply a policy that emphasizes a password's complexity rather than its capacity to meet traditional criteria such as uppercase keys, numerals, and special characters. - -For information about defining users, see [Defining users]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/#defining-users). - -This feature is not compatible with users specified as reserved. For information about reserved resources, see [Reserved and hidden resources]({{site.url}}{{site.baseurl}}/security/access-control/api#reserved-and-hidden-resources). -{: .important } - -Score-based password strength requires two settings to configure the feature. The following table describes the two settings. - -| Setting | Description | -| :--- | :--- | -| `plugins.security.restapi.password_min_length` | Sets the minimum number of characters for the password length. The default is `8`. This is also the minimum. | -| `plugins.security.restapi.password_score_based_validation_strength` | Sets a threshold to determine whether the password is strong or weak. There are four values that represent a threshold's increasing complexity.
`fair`--A very "guessable" password: provides protection from throttled online attacks.
`good`--A somewhat guessable password: provides protection from unthrottled online attacks.
`strong`--A safely "unguessable" password: provides moderate protection from an offline, slow-hash scenario.
`very_strong`--A very unguessable password: provides strong protection from an offline, slow-hash scenario. | - -The following example shows the settings configured for the `opensearch.yml` file and enabling a password with a minimum of 10 characters and a threshold requiring the highest strength: - -```yml -plugins.security.restapi.password_min_length: 10 -plugins.security.restapi.password_score_based_validation_strength: very_strong -``` - -When you try to create a user with a password that doesn't reach the specified threshold, the system generates a "weak password" warning, indicating that the password needs to be modified before you can save the user. - -The following example shows the response from the [Create user]({{site.url}}{{site.baseurl}}/security/access-control/api/#create-user) API when the password is weak: - -```json -{ - "status": "error", - "reason": "Weak password" -} -``` - -## allowlist.yml +## nodes_dn.yml -You can use `allowlist.yml` to add any endpoints and HTTP requests to a list of allowed endpoints and requests. If enabled, all users except the super admin are allowed access to only the specified endpoints and HTTP requests, and all other HTTP requests associated with the endpoint are denied. For example, if GET `_cluster/settings` is added to the allow list, users cannot submit PUT requests to `_cluster/settings` to update cluster settings. +`nodes_dn.yml` lets you add certificates' [distinguished names (DNs)]({{site.url}}{{site.baseurl}}/security/configuration/generate-certificates/#add-distinguished-names-to-opensearchyml) to an allow list to enable communication between any number of nodes or clusters. For example, a node that has the DN `CN=node1.example.com` in its allow list accepts communication from any other node or certificate that uses that DN. -Note that while you can configure access to endpoints this way, for most cases, it is still best to configure permissions using the Security plugin's users and roles, which have more granular settings. +The DNs get indexed into a [system index]({{site.url}}{{site.baseurl}}/security/configuration/system-indices) that only a super admin or an admin with a Transport Layer Security (TLS) certificate can access. If you want to programmatically add DNs to your allow lists, use the [REST API]({{site.url}}{{site.baseurl}}/security/access-control/api/#distinguished-names). ```yml --- _meta: - type: "allowlist" + type: "nodesdn" config_version: 2 -# Description: -# enabled - feature flag. -# if enabled is false, all endpoints are accessible. -# if enabled is true, all users except the SuperAdmin can only submit the allowed requests to the specified endpoints. -# SuperAdmin can access all APIs. -# SuperAdmin is defined by the SuperAdmin certificate, which is configured with the opensearch.yml setting plugins.security.authcz.admin_dn: -# Refer to the example setting in opensearch.yml to learn more about configuring SuperAdmin. -# -# requests - map of allow listed endpoints and HTTP requests - -#this name must be config -config: - enabled: true - requests: - /_cluster/settings: - - GET - /_cat/nodes: - - GET -``` - -To enable PUT requests to cluster settings, add PUT to the list of allowed operations under `/_cluster/settings`. - -```yml -requests: - /_cluster/settings: - - GET - - PUT -``` - -You can also add custom indexes to the allow list. `allowlist.yml` doesn't support wildcards, so you must manually specify all of the indexes you want to add. 
- -```yml -requests: # Only allow GET requests to /sample-index1/_doc/1 and /sample-index2/_doc/1 - /sample-index1/_doc/1: - - GET - /sample-index2/_doc/1: - - GET -``` - - -## roles.yml - -This file contains any initial roles that you want to add to the Security plugin. Aside from some metadata, the default file is empty, because the Security plugin has a number of static roles that it adds automatically. - -```yml ---- -complex-role: - reserved: false - hidden: false - cluster_permissions: - - "read" - - "cluster:monitor/nodes/stats" - - "cluster:monitor/task/get" - index_permissions: - - index_patterns: - - "opensearch_dashboards_sample_data_*" - dls: "{\"match\": {\"FlightDelay\": true}}" - fls: - - "~FlightNum" - masked_fields: - - "Carrier" - allowed_actions: - - "read" - tenant_permissions: - - tenant_patterns: - - "analyst_*" - allowed_actions: - - "kibana_all_write" - static: false -_meta: - type: "roles" - config_version: 2 +# Define nodesdn mapping name and corresponding values +# cluster1: +# nodes_dn: +# - CN=*.example.com ``` - ## roles_mapping.yml ```yml @@ -359,28 +261,37 @@ kibana_server: and_backend_roles: [] ``` +## roles.yml -## action_groups.yml - -This file contains any initial action groups that you want to add to the Security plugin. - -Aside from some metadata, the default file is empty, because the Security plugin has a number of static action groups that it adds automatically. These static action groups cover a wide variety of use cases and are a great way to get started with the plugin. +This file contains any initial roles that you want to add to the Security plugin. Aside from some metadata, the default file is empty, because the Security plugin has a number of static roles that it adds automatically. ```yml --- -my-action-group: +complex-role: reserved: false hidden: false - allowed_actions: - - "indices:data/write/index*" - - "indices:data/write/update*" - - "indices:admin/mapping/put" - - "indices:data/write/bulk*" + cluster_permissions: - "read" - - "write" + - "cluster:monitor/nodes/stats" + - "cluster:monitor/task/get" + index_permissions: + - index_patterns: + - "opensearch_dashboards_sample_data_*" + dls: "{\"match\": {\"FlightDelay\": true}}" + fls: + - "~FlightNum" + masked_fields: + - "Carrier" + allowed_actions: + - "read" + tenant_permissions: + - tenant_patterns: + - "analyst_*" + allowed_actions: + - "kibana_all_write" static: false _meta: - type: "actiongroups" + type: "roles" config_version: 2 ``` @@ -400,20 +311,105 @@ admin_tenant: description: "Demo tenant for admin user" ``` -## nodes_dn.yml +## opensearch.yml -`nodes_dn.yml` lets you add certificates' [distinguished names (DNs)]({{site.url}}{{site.baseurl}}/security/configuration/generate-certificates/#add-distinguished-names-to-opensearchyml) an allow list to enable communication between any number of nodes and/or clusters. For example, a node that has the DN `CN=node1.example.com` in its allow list accepts communication from any other node or certificate that uses that DN. +In addition to many OpenSearch settings, this file contains paths to TLS certificates and their attributes, such as distinguished names and trusted certificate authorities. -The DNs get indexed into a [system index]({{site.url}}{{site.baseurl}}/security/configuration/system-indices) that only a super admin or an admin with a Transport Layer Security (TLS) certificate can access. 
If you want to programmatically add DNs to your allow lists, use the [REST API]({{site.url}}{{site.baseurl}}/security/access-control/api/#distinguished-names). +```yml +plugins.security.ssl.transport.pemcert_filepath: esnode.pem +plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem +plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem +plugins.security.ssl.transport.enforce_hostname_verification: false +plugins.security.ssl.http.enabled: true +plugins.security.ssl.http.pemcert_filepath: esnode.pem +plugins.security.ssl.http.pemkey_filepath: esnode-key.pem +plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem +plugins.security.allow_unsafe_democertificates: true +plugins.security.allow_default_init_securityindex: true +plugins.security.authcz.admin_dn: + - CN=kirk,OU=client,O=client,L=test, C=de + +plugins.security.audit.type: internal_opensearch +plugins.security.enable_snapshot_restore_privilege: true +plugins.security.check_snapshot_restore_write_privileges: true +plugins.security.cache.ttl_minutes: 60 +plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"] +plugins.security.system_indices.enabled: true +plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"] +node.max_local_storage_nodes: 3 +``` + +For a full list of `opensearch.yml` Security plugin settings, see [Security settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/). +{: .note} + +### Refining your configuration + +The `plugins.security.allow_default_init_securityindex` setting, when set to `true`, sets the Security plugin to its default security settings if an attempt to create the security index fails when OpenSearch launches. Default security settings are stored in YAML files contained in the `opensearch-project/security/config` directory. By default, this setting is `false`. ```yml ---- -_meta: - type: "nodesdn" - config_version: 2 +plugins.security.allow_default_init_securityindex: true +``` -# Define nodesdn mapping name and corresponding values -# cluster1: -# nodes_dn: -# - CN=*.example.com +An authentication cache for the Security plugin exists to help speed up authentication by temporarily storing user objects returned from the backend so that the Security plugin is not required to make repeated requests for them. To determine how long it takes for caching to time out, you can use the `plugins.security.cache.ttl_minutes` property to set a value in minutes. The default is `60`. You can disable caching by setting the value to `0`. + +```yml +plugins.security.cache.ttl_minutes: 60 +``` + +### Enabling user access to system indexes + +Mapping a system index permission to a user allows that user to modify the system index specified in the permission's name (the one exception is the Security plugin's [system index]({{site.url}}{{site.baseurl}}/security/configuration/system-indices/)). The `plugins.security.system_indices.permission.enabled` setting provides a way for administrators to make this permission available for or hidden from role mapping. 
+ +When set to `true`, the feature is enabled and users with permission to modify roles can create roles that include permissions that grant access to system indexes: + +```yml +plugins.security.system_indices.permission.enabled: true +``` + +When set to `false`, the permission is disabled and only admins with an admin certificate can make changes to system indexes. By default, the permission is set to `false` in a new cluster. + +To learn more about system index permissions, see [System index permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/#system-index-permissions). + + +### Password settings + +If you want to run your users' passwords against some validation, specify a regular expression (regex) in this file. You can also include an error message that loads when passwords don't pass validation. The following example demonstrates how to include a regex so OpenSearch requires new passwords to be a minimum of eight characters with at least one uppercase, one lowercase, one digit, and one special character. + +Note that OpenSearch validates only users and passwords created through OpenSearch Dashboards or the REST API. + +```yml +plugins.security.restapi.password_validation_regex: '(?=.*[A-Z])(?=.*[^a-zA-Z\d])(?=.*[0-9])(?=.*[a-z]).{8,}' +plugins.security.restapi.password_validation_error_message: "Password must be minimum 8 characters long and must contain at least one uppercase letter, one lowercase letter, one digit, and one special character." +``` + +In addition, a score-based password strength estimator allows you to set a threshold for password strength when creating a new internal user or updating a user's password. This feature makes use of the [zxcvbn library](https://github.com/dropbox/zxcvbn) to apply a policy that emphasizes a password's complexity rather than its capacity to meet traditional criteria such as uppercase keys, numerals, and special characters. + +For information about defining users, see [Defining users]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/#defining-users). + +This feature is not compatible with users specified as reserved. For information about reserved resources, see [Reserved and hidden resources]({{site.url}}{{site.baseurl}}/security/access-control/api#reserved-and-hidden-resources). +{: .important } + +Score-based password strength requires two settings to configure the feature. The following table describes the two settings. + +| Setting | Description | +| :--- | :--- | +| `plugins.security.restapi.password_min_length` | Sets the minimum number of characters for the password length. The default is `8`. This is also the minimum. | +| `plugins.security.restapi.password_score_based_validation_strength` | Sets a threshold to determine whether the password is strong or weak. There are four values that represent a threshold's increasing complexity.
`fair`--A very "guessable" password: provides protection from throttled online attacks.
`good`--A somewhat guessable password: provides protection from unthrottled online attacks.
`strong`--A safely "unguessable" password: provides moderate protection from an offline, slow-hash scenario.
`very_strong`--A very unguessable password: provides strong protection from an offline, slow-hash scenario. | + +The following example shows the settings configured for the `opensearch.yml` file and enabling a password with a minimum of 10 characters and a threshold requiring the highest strength: + +```yml +plugins.security.restapi.password_min_length: 10 +plugins.security.restapi.password_score_based_validation_strength: very_strong +``` + +When you try to create a user with a password that doesn't reach the specified threshold, the system generates a "weak password" warning, indicating that the password needs to be modified before you can save the user. + +The following example shows the response from the [Create user]({{site.url}}{{site.baseurl}}/security/access-control/api/#create-user) API when the password is weak: + +```json +{ + "status": "error", + "reason": "Weak password" +} ``` From 98e8788ffa42d5de0e1d65c36cdba89e08980049 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Thu, 29 Aug 2024 17:29:00 -0600 Subject: [PATCH 016/111] Revise Amazon S3 data source (#8109) * Revise text for clarity and conciseness Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Signed-off-by: Melissa Vagi * Update _dashboards/management/S3-data-source.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _dashboards/management/S3-data-source.md | 46 ++++++++++-------------- 1 file changed, 18 insertions(+), 28 deletions(-) diff --git a/_dashboards/management/S3-data-source.md b/_dashboards/management/S3-data-source.md index 585edeac81..1a7cc579b0 100644 --- a/_dashboards/management/S3-data-source.md +++ b/_dashboards/management/S3-data-source.md @@ -10,51 +10,41 @@ has_children: true Introduced 2.11 {: .label .label-purple } -Starting with OpenSearch 2.11, you can connect OpenSearch to your Amazon Simple Storage Service (Amazon S3) data source using the OpenSearch Dashboards UI. You can then query that data, optimize query performance, define tables, and integrate your S3 data within a single UI. +You can connect OpenSearch to your Amazon Simple Storage Service (Amazon S3) data source using the OpenSearch Dashboards interface and then query that data, optimize query performance, define tables, and integrate your S3 data. ## Prerequisites -To connect data from Amazon S3 to OpenSearch using OpenSearch Dashboards, you must have: +Before connecting a data source, verify that the following requirements are met: -- Access to Amazon S3 and the [AWS Glue Data Catalog](https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/connectors/s3glue_connector.rst#id2). -- Access to OpenSearch and OpenSearch Dashboards. 
-- An understanding of OpenSearch data source and connector concepts. See the [developer documentation](https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/datasources.rst#introduction) for information about these concepts. +- You have access to Amazon S3 and the [AWS Glue Data Catalog](https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/connectors/s3glue_connector.rst#id2). +- You have access to OpenSearch and OpenSearch Dashboards. +- You have an understanding of OpenSearch data source and connector concepts. See the [developer documentation](https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/datasources.rst#introduction) for more information. -## Connect your Amazon S3 data source +## Connect your data source -To connect your Amazon S3 data source, follow these steps: +To connect your data source, follow these steps: -1. From the OpenSearch Dashboards main menu, select **Management** > **Data sources**. -2. On the **Data sources** page, select **New data source** > **S3**. An example UI is shown in the following image. +1. From the OpenSearch Dashboards main menu, go to **Management** > **Dashboards Management** > **Data sources**. +2. On the **Data sources** page, select **Create data source connection** > **Amazon S3**. +3. On the **Configure Amazon S3 data source** page, enter the data source, authentication details, and permissions. +4. Select the **Review Configuration** button to verify the connection details. +5. Select the **Connect to Amazon S3** button to establish a connection. - Amazon S3 data sources UI +## Manage your data source -3. On the **Configure Amazon S3 data source** page, enter the required **Data source details**, **AWS Glue authentication details**, **AWS Glue index store details**, and **Query permissions**. An example UI is shown in the following image. - - Amazon S3 configuration UI - -4. Select the **Review Configuration** button and verify the details. -5. Select the **Connect to Amazon S3** button. - -## Manage your Amazon S3 data source - -Once you've connected your Amazon S3 data source, you can explore your data through the **Manage data sources** tab. The following steps guide you through using this functionality: +To manage your data source, follow these steps: 1. On the **Manage data sources** tab, choose a date source from the list. -2. On that data source's page, you can manage the data source, choose a use case, and manage access controls and configurations. An example UI is shown in the following image. - - Manage data sources UI - -3. (Optional) Explore the Amazon S3 use cases, including querying your data and optimizing query performance. Go to **Next steps** to learn more about each use case. +2. On the page for the data source, you can manage the data source, choose a use case, and configure access controls. +3. (Optional) Explore the Amazon S3 use cases, including querying your data and optimizing query performance. Refer to the [**Next steps**](#next-steps) section to learn more about each use case. ## Limitations -This feature is still under development, including the data integration functionality. For real-time updates, see the [developer documentation on GitHub](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#limitations). +This feature is currently under development, including the data integration functionality. 
For up-to-date information, refer to the [developer documentation on GitHub](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#limitations). ## Next steps - Learn about [querying your data in Data Explorer]({{site.url}}{{site.baseurl}}/dashboards/management/query-data-source/) through OpenSearch Dashboards. -- Learn about ways to [optimize the query performance of your external data sources]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/), such as Amazon S3, through Query Workbench. +- Learn about [optimizing the query performance of your external data sources]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/), such as Amazon S3, through Query Workbench. - Learn about [Amazon S3 and AWS Glue Data Catalog](https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/connectors/s3glue_connector.rst) and the APIS used with Amazon S3 data sources, including configuration settings and query examples. - Learn about [managing your indexes]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/index/) through OpenSearch Dashboards. - From 67682f2f7997606ed6949b81634c19cc772804f1 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Fri, 30 Aug 2024 09:51:34 -0600 Subject: [PATCH 017/111] Delete outdated images (#8130) * Delete outdated images Signed-off-by: Melissa Vagi * Delete outdated images Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi --- .../management/accelerate-external-data.md | 46 ++++++------------- _dashboards/management/query-data-source.md | 25 +++------- 2 files changed, 21 insertions(+), 50 deletions(-) diff --git a/_dashboards/management/accelerate-external-data.md b/_dashboards/management/accelerate-external-data.md index 00e4600ffd..6d1fa030e4 100644 --- a/_dashboards/management/accelerate-external-data.md +++ b/_dashboards/management/accelerate-external-data.md @@ -12,55 +12,37 @@ Introduced 2.11 {: .label .label-purple } -Query performance can be slow when using external data sources for reasons such as network latency, data transformation, and data volume. You can optimize your query performance by using OpenSearch indexes, such as a skipping index or a covering index. A _skipping index_ uses skip acceleration methods, such as partition, minimum and maximum values, and value sets, to ingest and create compact aggregate data structures. This makes them an economical option for direct querying scenarios. A _covering index_ ingests all or some of the data from the source into OpenSearch and makes it possible to use all OpenSearch Dashboards and plugin functionality. See the [Flint Index Reference Manual](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md) for comprehensive guidance on this feature's indexing process. +Query performance can be slow when using external data sources for reasons such as network latency, data transformation, and data volume. You can optimize your query performance by using OpenSearch indexes, such as a skipping index or a covering index. + +A _skipping index_ uses skip acceleration methods, such as partition, minimum and maximum values, and value sets, to ingest and create compact aggregate data structures. This makes them an economical option for direct querying scenarios. + +A _covering index_ ingests all or some of the data from the source into OpenSearch and makes it possible to use all OpenSearch Dashboards and plugin functionality. 
See the [Flint Index Reference Manual](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md) for comprehensive guidance on this feature's indexing process. ## Data sources use case: Accelerate performance To get started with the **Accelerate performance** use case available in **Data sources**, follow these steps: 1. Go to **OpenSearch Dashboards** > **Query Workbench** and select your Amazon S3 data source from the **Data sources** dropdown menu in the upper-left corner. -2. From the left-side navigation menu, select a database. An example using the `http_logs` database is shown in the following image. - - Query Workbench accelerate data UI - +2. From the left-side navigation menu, select a database. 3. View the results in the table and confirm that you have the desired data. 4. Create an OpenSearch index by following these steps: - 1. Select the **Accelerate data** button. A pop-up window appears. An example is shown in the following image. - - Accelerate data pop-up window - + 1. Select the **Accelerate data** button. A pop-up window appears. 2. Enter your details in **Select data fields**. In the **Database** field, select the desired acceleration index: **Skipping index** or **Covering index**. A _skipping index_ uses skip acceleration methods, such as partition, min/max, and value sets, to ingest data using compact aggregate data structures. This makes them an economical option for direct querying scenarios. A _covering index_ ingests all or some of the data from the source into OpenSearch and makes it possible to use all OpenSearch Dashboards and plugin functionality. - -5. Under **Index settings**, enter the information for your acceleration index. For information about naming, select **Help**. Note that an Amazon S3 table can only have one skipping index at a time. An example is shown in the following image. - - Skipping index settings +5. Under **Index settings**, enter the information for your acceleration index. For information about naming, select **Help**. Note that an Amazon S3 table can only have one skipping index at a time. ### Define skipping index settings -1. Under **Skipping index definition**, select the **Add fields** button to define the skipping index acceleration method and choose the fields you want to add. An example is shown in the following image. - - Skipping index add fields - +1. Under **Skipping index definition**, select the **Add fields** button to define the skipping index acceleration method and choose the fields you want to add. 2. Select the **Copy Query to Editor** button to apply your skipping index settings. -3. View the skipping index query details in the table pane and then select the **Run** button. Your index is added to the left-side navigation menu containing the list of your databases. An example is shown in the following image. - - Run a skippping or covering index UI +3. View the skipping index query details in the table pane and then select the **Run** button. Your index is added to the left-side navigation menu containing the list of your databases. ### Define covering index settings -1. Under **Index settings**, enter a valid index name. Note that each Amazon S3 table can have multiple covering indexes. An example is shown in the following image. - - Covering index settings - -2. Once you have added the index name, define the covering index fields by selecting `(add fields here)` under **Covering index definition**. An example is shown in the following image. - - Covering index field naming - +1. 
Under **Index settings**, enter a valid index name. Note that each Amazon S3 table can have multiple covering indexes. +2. Once you have added the index name, define the covering index fields by selecting `(add fields here)` under **Covering index definition**. 3. Select the **Copy Query to Editor** button to apply your covering index settings. -4. View the covering index query details in the table pane and then select the **Run** button. Your index is added to the left-side navigation menu containing the list of your databases. An example UI is shown in the following image. - - Run index in Query Workbench +4. View the covering index query details in the table pane and then select the **Run** button. Your index is added to the left-side navigation menu containing the list of your databases. ## Limitations -This feature is still under development, so there are some limitations. For real-time updates, see the [developer documentation on GitHub](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#limitations). +This feature is still under development, so there are some limitations. For real-time updates, refer to the [developer documentation on GitHub](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#limitations). diff --git a/_dashboards/management/query-data-source.md b/_dashboards/management/query-data-source.md index f1496b3e17..a3392c073e 100644 --- a/_dashboards/management/query-data-source.md +++ b/_dashboards/management/query-data-source.md @@ -11,7 +11,7 @@ has_children: false Introduced 2.11 {: .label .label-purple } -This tutorial guides you through using the **Query data** use case for querying and visualizing your Amazon Simple Storage Service (Amazon S3) data using OpenSearch Dashboards. +This tutorial guides you through using the **Query data** use case for querying and visualizing your Amazon Simple Storage Service (Amazon S3) data using OpenSearch Dashboards. ## Prerequisites @@ -22,15 +22,9 @@ You must be using the `opensearch-security` plugin and have the appropriate role To get started, follow these steps: 1. On the **Manage data sources** page, select your data source from the list. -2. On the data source's detail page, select the **Query data** card. This option takes you to the **Observability** > **Logs** page, which is shown in the following image. - - Observability Logs UI - +2. On the data source's detail page, select the **Query data** card. This option takes you to the **Observability** > **Logs** page. 3. Select the **Event Explorer** button. This option creates and saves frequently searched queries and visualizations using [Piped Processing Language (PPL)]({{site.url}}{{site.baseurl}}/search-plugins/sql/ppl/index/) or [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/), which connects to Spark SQL. -4. Select the Amazon S3 data source from the dropdown menu in the upper-left corner. An example is shown in the following image. - - Observability Logs Amazon S3 dropdown menu - +4. Select the Amazon S3 data source from the dropdown menu in the upper-left corner. 5. Enter the query in the **Enter PPL query** field. Note that the default language is SQL. To change the language, select PPL from the dropdown menu. 6. Select the **Search** button. The **Query Processing** message is shown, confirming that your query is being processed. 7. View the results, which are listed in a table on the **Events** tab. On this page, details such as available fields, source, and time are shown in a table format. 
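+
+The statement entered in step 5 might look like the following PPL sketch. The table name is hypothetical: Amazon S3 tables are addressed as `<data source>.<database>.<table>`, and the actual names come from your AWS Glue Data Catalog:
+
+```
+source = my_s3.default.http_logs
+| where status = 500
+| stats count() as error_count by clientip
+| sort - error_count
+| head 10
+```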
@@ -40,10 +34,7 @@ To get started, follow these steps: To create visualizations, follow these steps: -1. On the **Explorer** page, select the **Visualizations** tab. An example is shown in the following image. - - Explorer Amazon S3 visualizations UI - +1. On the **Explorer** page, select the **Visualizations** tab. 2. Select **Index data to visualize**. This option currently only creates [acceleration indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/), which give you views of the data visualizations from the **Visualizations** tab. To create a visualization of your Amazon S3 data, go to **Discover**. See the [Discover documentation]({{site.url}}{{site.baseurl}}/dashboards/discover/index-discover/) for information and a tutorial. ## Use Query Workbench with your Amazon S3 data source @@ -53,14 +44,12 @@ To create visualizations, follow these steps: To use Query Workbench with your Amazon S3 data, follow these steps: 1. From the OpenSearch Dashboards main menu, select **OpenSearch Plugins** > **Query Workbench**. -2. From the **Data Sources** dropdown menu in the upper-left corner, choose your Amazon S3 data source. Your data begins loading the databases that are part of your data source. An example is shown in the following image. - - Query Workbench Amazon S3 data loading UI - +2. From the **Data Sources** dropdown menu in the upper-left corner, choose your Amazon S3 data source. Your data begins loading the databases that are part of your data source. 3. View the databases listed in the left-side navigation menu and select a database to view its details. Any information about acceleration indexes is listed under **Acceleration index destination**. 4. Choose the **Describe Index** button to learn more about how data is stored in that particular index. 5. Choose the **Drop index** button to delete and clear both the OpenSearch index and the Amazon S3 Spark job that refreshes the data. -6. Enter your SQL query and select **Run**. +6. Enter your SQL query and select **Run**. + ## Next steps - Learn about [accelerating the query performance of your external data sources]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/). From 2648797663b2f86a4329051d22257d25f33aca96 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Fri, 30 Aug 2024 18:12:16 -0400 Subject: [PATCH 018/111] Update network-settings.md (#8138) Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- .../configuring-opensearch/network-settings.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_install-and-configure/configuring-opensearch/network-settings.md b/_install-and-configure/configuring-opensearch/network-settings.md index f96dde97e1..dc61ccc49b 100644 --- a/_install-and-configure/configuring-opensearch/network-settings.md +++ b/_install-and-configure/configuring-opensearch/network-settings.md @@ -51,7 +51,7 @@ OpenSearch supports the following advanced network settings for transport commun ## Selecting the transport -The default OpenSearch transport is provided by the `transport-netty4` module and uses the [Netty 4](https://netty.io/) engine for both internal TCP-based communication between nodes in the cluster and external HTTP-based communication with clients. This communication is fully asynchronous and non-blocking. 
However, there are other transport plugins available that can be used interchangeably: +The default OpenSearch transport is provided by the `transport-netty4` module and uses the [Netty 4](https://netty.io/) engine for both internal TCP-based communication between nodes in the cluster and external HTTP-based communication with clients. This communication is fully asynchronous and non-blocking. The following table lists other available transport plugins that can be used interchangeably. Plugin | Description :---------- | :-------- From 0427252ba7b2dad26b6515cd04a47520f480581b Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Sun, 1 Sep 2024 23:49:19 -0500 Subject: [PATCH 019/111] Add common operations section to User Guide. (#7974) * Add common operations section to User Guide. Signed-off-by: Archer * Fix link Signed-off-by: Archer * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Archer Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- .../choosing-a-workload.md | 2 +- .../common-operations.md | 181 ++++++++++++++++++ 2 files changed, 182 insertions(+), 1 deletion(-) create mode 100644 _benchmark/user-guide/understanding-workloads/common-operations.md diff --git a/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md b/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md index d7ae48ad0a..6016caee0a 100644 --- a/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md +++ b/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md @@ -18,7 +18,7 @@ Consider the following criteria when deciding which workload would work best for - The cluster's use case. - The data types that your cluster uses compared to the data structure of the documents contained in the workload. Each workload contains an example document so that you can compare data types, or you can view the index mappings and data types in the `index.json` file. -- The query types most commonly used inside your cluster. The `operations/default.json` file contains information about the query types and workload operations. +- The query types most commonly used inside your cluster. The `operations/default.json` file contains information about the query types and workload operations. For a list of common operations, see [Common operations]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/common-operations/). ## General search clusters diff --git a/_benchmark/user-guide/understanding-workloads/common-operations.md b/_benchmark/user-guide/understanding-workloads/common-operations.md new file mode 100644 index 0000000000..c9fe15c18c --- /dev/null +++ b/_benchmark/user-guide/understanding-workloads/common-operations.md @@ -0,0 +1,181 @@ +--- +layout: default +title: Common operations +nav_order: 16 +grand_parent: User guide +parent: Understanding workloads +--- + +# Common operations + +[Test procedures]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/anatomy-of-a-workload#_operations-and-_test-procedures) use a variety of operations, found inside the `operations` directory of a workload. This page details the most common operations found inside OpenSearch Benchmark workloads. 
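+
+To see where these operations fit, the following abbreviated sketch shows how a workload's test procedure typically references them by name in its `schedule`. The operation names and task parameters are illustrative only:
+
+```json
+{
+  "schedule": [
+    {
+      "operation": "index-append",
+      "warmup-time-period": 120,
+      "clients": 8
+    },
+    {
+      "operation": "default",
+      "clients": 4,
+      "target-throughput": 100
+    }
+  ]
+}
+```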
+ +- [Common operations](#common-operations) + - [bulk](#bulk) + - [create-index](#create-index) + - [delete-index](#delete-index) + - [cluster-health](#cluster-health) + - [refresh](#refresh) + - [search](#search) + + +## bulk + + +The `bulk` operation type allows you to run [bulk](/api-reference/document-apis/bulk/) requests as a task. + +The following example shows a `bulk` operation type with a `bulk-size` of `5000` documents: + +```yml +{ + "name": "index-append", + "operation-type": "bulk", + "bulk-size": 5000 +} +``` + + + +## create-index + + +The `create-index` operation runs the [Create Index API](/api-reference/index-apis/create-index/). It supports the following two modes of index creation: + +- Creating all indexes specified in the workloads `indices` section +- Creating one specific index defined within the operation itself + +The following example creates all indexes defined in the `indices` section of the workload. It uses all of the index settings defined in the workload but overrides the number of shards: + +```yml +{ + "name": "create-all-indices", + "operation-type": "create-index", + "settings": { + "index.number_of_shards": 1 + }, + "request-params": { + "wait_for_active_shards": "true" + } +} +``` + +The following example creates a new index with all index settings specified in the operation body: + +```yml +{ + "name": "create-an-index", + "operation-type": "create-index", + "index": "people", + "body": { + "settings": { + "index.number_of_shards": 0 + }, + "mappings": { + "docs": { + "properties": { + "name": { + "type": "text" + } + } + } + } + } +} +``` + + + + +## delete-index + + +The `delete-index` operation runs the [Delete Index API](api-reference/index-apis/delete-index/). Like with the [`create-index`](#create-index) operation, you can delete all indexes found in the `indices` section of the workload or delete one or more indexes based on the string passed in the `index` setting. + +The following example deletes all indexes found in the `indices` section of the workload: + +```yml +{ + "name": "delete-all-indices", + "operation-type": "delete-index" +} +``` + +The following example deletes all `logs_*` indexes: + +```yml +{ + "name": "delete-logs", + "operation-type": "delete-index", + "index": "logs-*", + "only-if-exists": false, + "request-params": { + "expand_wildcards": "all", + "allow_no_indices": "true", + "ignore_unavailable": "true" + } +} +``` + + +## cluster-health + + +The `cluster-health` operation runs the [Cluster Health API](api-reference/cluster-api/cluster-health/), which checks the cluster health status and returns the expected status according to the parameters set for `request-params`. If an unexpected cluster health status is returned, the operation reports a failure. You can use the `--on-error` option in the OpenSearch Benchmark `execute-test` command to control how OpenSearch Benchmark behaves when the health check fails. + +The following example creates a `cluster-health` operation that checks for a `green` health status on any `log-*` indexes: + +```yml +{ + "name": "check-cluster-green", + "operation-type": "cluster-health", + "index": "logs-*", + "request-params": { + "wait_for_status": "green", + "wait_for_no_relocating_shards": "true" + }, + "retry-until-success": true +} + +``` + + +## refresh + + +The `refresh` operation runs the Refresh API. The `operation` returns no metadata. 
+ + +The following example refreshes all `logs-*` indexes: + +```yml +{ + "name": "refresh", + "operation-type": "refresh", + "index": "logs-*" +} +``` + + + +## search + + +The `search` operation runs the [Search API](/api-reference/search/), which you can use to run queries in OpenSearch Benchmark indexes. + +The following example runs a `match_all` query inside the `search` operation: + +```yml +{ + "name": "default", + "operation-type": "search", + "body": { + "query": { + "match_all": {} + } + }, + "request-params": { + "_source_include": "some_field", + "analyze_wildcard": "false" + } +} +``` From e3576fba3eed65b9fa1c635fba591723542bddb5 Mon Sep 17 00:00:00 2001 From: Kunal Kotwani Date: Tue, 3 Sep 2024 07:21:49 -0700 Subject: [PATCH 020/111] Update known limitations for kNN based indexes (#8137) * Update known limitations for kNN based indexes Signed-off-by: Kunal Kotwani * Update _tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Kunal Kotwani Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- .../availability-and-recovery/snapshots/searchable_snapshot.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md b/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md index 4af25004a7..b9e35b2697 100644 --- a/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md +++ b/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md @@ -108,4 +108,5 @@ The following are known limitations of the searchable snapshots feature: - Many remote object stores charge on a per-request basis for retrieval, so users should closely monitor any costs incurred. - Searching remote data can impact the performance of other queries running on the same node. We recommend that users provision dedicated nodes with the `search` role for performance-critical applications. - For better search performance, consider [force merging]({{site.url}}{{site.baseurl}}/api-reference/index-apis/force-merge/) indexes into a smaller number of segments before taking a snapshot. For the best performance, at the cost of using compute resources prior to snapshotting, force merge your index into one segment. -- We recommend configuring a maximum ratio of remote data to local disk cache size using the `cluster.filecache.remote_data_ratio` setting. A ratio of 5 is a good starting point for most workloads to ensure good query performance. If the ratio is too large, then there may not be sufficient disk space to handle the search workload. For more details on the maximum ratio of remote data, see issue [#11676](https://github.com/opensearch-project/OpenSearch/issues/11676). +- We recommend configuring a maximum ratio of remote data to local disk cache size using the `cluster.filecache.remote_data_ratio` setting. A ratio of 5 is a good starting point for most workloads to ensure good query performance. If the ratio is too large, then there may not be sufficient disk space to handle the search workload. For more details on the maximum ratio of remote data, see issue [#11676](https://github.com/opensearch-project/OpenSearch/issues/11676). 
+- k-NN native-engine-based indexes using `faiss` and `nmslib` engines are incompatible with searchable snapshots. From 9e7aedc3d11d52fec60513300786c6d2f9ab97a9 Mon Sep 17 00:00:00 2001 From: kkewwei Date: Tue, 3 Sep 2024 22:25:13 +0800 Subject: [PATCH 021/111] Update binary.md (#8142) According the code, the default value of `hasDocValues` is false https://github.com/opensearch-project/OpenSearch/blob/03d9a249e47b99b33c6de3625f43b12bef29c1cb/server/src/main/java/org/opensearch/index/mapper/BinaryFieldMapper.java#L85 Signed-off-by: kkewwei Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _field-types/supported-field-types/binary.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_field-types/supported-field-types/binary.md b/_field-types/supported-field-types/binary.md index d6974ad4cf..99d468c1dc 100644 --- a/_field-types/supported-field-types/binary.md +++ b/_field-types/supported-field-types/binary.md @@ -50,5 +50,5 @@ The following table lists the parameters accepted by binary field types. All par Parameter | Description :--- | :--- -`doc_values` | A Boolean value that specifies whether the field should be stored on disk so that it can be used for aggregations, sorting, or scripting. Optional. Default is `true`. -`store` | A Boolean value that specifies whether the field value should be stored and can be retrieved separately from the _source field. Optional. Default is `false`. \ No newline at end of file +`doc_values` | A Boolean value that specifies whether the field should be stored on disk so that it can be used for aggregations, sorting, or scripting. Optional. Default is `false`. +`store` | A Boolean value that specifies whether the field value should be stored and can be retrieved separately from the _source field. Optional. Default is `false`. From a5b230cecdba02b5c4d8f66a4f2d9fa8243f56ec Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Tue, 3 Sep 2024 13:38:34 -0500 Subject: [PATCH 022/111] Fix broken links (#8147) Closes #8144 Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _data-prepper/pipelines/configuration/sinks/s3.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/_data-prepper/pipelines/configuration/sinks/s3.md b/_data-prepper/pipelines/configuration/sinks/s3.md index 3ff266cccf..6bae749d38 100644 --- a/_data-prepper/pipelines/configuration/sinks/s3.md +++ b/_data-prepper/pipelines/configuration/sinks/s3.md @@ -173,14 +173,14 @@ When you provide your own Avro schema, that schema defines the final structure o In cases where your data is uniform, you may be able to automatically generate a schema. Automatically generated schemas are based on the first event that the codec receives. The schema will only contain keys from this event, and all keys must be present in all events in order to automatically generate a working schema. Automatically generated schemas make all fields nullable. Use the `include_keys` and `exclude_keys` sink configurations to control which data is included in the automatically generated schema. -Avro fields should use a null [union](https://avro.apache.org/docs/1.10.2/spec.html#Unions) because this will allow missing values. Otherwise, all required fields must be present for each event. Use non-nullable fields only when you are certain they exist. +Avro fields should use a null [union](https://avro.apache.org/docs/1.12.0/specification/#unions) because this will allow missing values. 
Otherwise, all required fields must be present for each event. Use non-nullable fields only when you are certain they exist. Use the following options to configure the codec. Option | Required | Type | Description :--- | :--- | :--- | :--- -`schema` | Yes | String | The Avro [schema declaration](https://avro.apache.org/docs/1.2.0/spec.html#schemas). Not required if `auto_schema` is set to true. -`auto_schema` | No | Boolean | When set to `true`, automatically generates the Avro [schema declaration](https://avro.apache.org/docs/1.2.0/spec.html#schemas) from the first event. +`schema` | Yes | String | The Avro [schema declaration](https://avro.apache.org/docs/1.12.0/specification/#schema-declaration). Not required if `auto_schema` is set to true. +`auto_schema` | No | Boolean | When set to `true`, automatically generates the Avro [schema declaration](https://avro.apache.org/docs/1.12.0/specification/#schema-declaration) from the first event. ### `ndjson` codec @@ -208,8 +208,8 @@ Use the following options to configure the codec. Option | Required | Type | Description :--- | :--- | :--- | :--- -`schema` | Yes | String | The Avro [schema declaration](https://avro.apache.org/docs/current/specification/#schema-declaration). Not required if `auto_schema` is set to true. -`auto_schema` | No | Boolean | When set to `true`, automatically generates the Avro [schema declaration](https://avro.apache.org/docs/current/specification/#schema-declaration) from the first event. +`schema` | Yes | String | The Avro [schema declaration](https://avro.apache.org/docs/1.12.0/specification/#schema-declaration). Not required if `auto_schema` is set to true. +`auto_schema` | No | Boolean | When set to `true`, automatically generates the Avro [schema declaration](https://avro.apache.org/docs/1.12.0/specification/#schema-declaration) from the first event. 
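Because the `avro` and `parquet` codecs both accept an Avro schema declaration, it can be useful to see what a nullable field looks like in practice. The following schema is a minimal sketch; the record and field names are assumptions for the example only:

```json
{
  "type": "record",
  "name": "LogEvent",
  "fields": [
    { "name": "message", "type": "string" },
    { "name": "status_code", "type": ["null", "int"], "default": null }
  ]
}
```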
### Setting a schema with Parquet From ef8abd7ae007917e37f53b69f21c377db64353da Mon Sep 17 00:00:00 2001 From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Date: Tue, 3 Sep 2024 19:55:57 +0100 Subject: [PATCH 023/111] Addition of full file paths in security documentation (#8113) * added full file paths for security config files Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com * added full file paths for security config files Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com # Conflicts: # _security/configuration/yaml.md * small edits to full file paths for security config files Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com * updates to file paths following tech review Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Take into account previous changes Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- .../configuring-opensearch/security-settings.md | 2 +- _security/configuration/index.md | 2 +- _security/configuration/security-admin.md | 4 ++-- _security/configuration/yaml.md | 8 +++++--- 4 files changed, 9 insertions(+), 7 deletions(-) diff --git a/_install-and-configure/configuring-opensearch/security-settings.md b/_install-and-configure/configuring-opensearch/security-settings.md index b9c375d208..2ac09a4819 100644 --- a/_install-and-configure/configuring-opensearch/security-settings.md +++ b/_install-and-configure/configuring-opensearch/security-settings.md @@ -9,7 +9,7 @@ nav_order: 40 The Security plugin provides a number of YAML configuration files that are used to store the necessary settings that define the way the Security plugin manages users, roles, and activity within the cluster. For a full list of the Security plugin configuration files, see [Modifying the YAML files]({{site.url}}{{site.baseurl}}/security/configuration/yaml/). -The following sections describe security-related settings in `opensearch.yml`. To learn more about static and dynamic settings, see [Configuring OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/). +The following sections describe security-related settings in `opensearch.yml`. You can find the `opensearch.yml` in the `/config/opensearch.yml`. To learn more about static and dynamic settings, see [Configuring OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/). 
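For example, a few common security-related entries in `opensearch.yml` might look like the following; the values shown are illustrative, not recommendations:

```yml
plugins.security.ssl.http.enabled: true
plugins.security.audit.type: internal_opensearch
plugins.security.allow_default_init_securityindex: true
```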
## Common settings diff --git a/_security/configuration/index.md b/_security/configuration/index.md index 31292c320a..e351e8865f 100644 --- a/_security/configuration/index.md +++ b/_security/configuration/index.md @@ -28,4 +28,4 @@ The Security plugin has several default users, roles, action groups, permissions {: .note } For a full list of `opensearch.yml` Security plugin settings, Security plugin settings, see [Security settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/). -{: .note} \ No newline at end of file +{: .note} diff --git a/_security/configuration/security-admin.md b/_security/configuration/security-admin.md index a03d30fd03..b4d23dce5b 100755 --- a/_security/configuration/security-admin.md +++ b/_security/configuration/security-admin.md @@ -23,13 +23,13 @@ The `securityadmin.sh` script requires SSL/TLS HTTP to be enabled for your OpenS ## A word of caution -If you make changes to the configuration files in `config/opensearch-security`, OpenSearch does _not_ automatically apply these changes. Instead, you must run `securityadmin.sh` to load the updated files into the index. +If you make changes to the configuration files in `config/opensearch-security`, OpenSearch does _not_ automatically apply these changes. Instead, you must run `securityadmin.sh` to load the updated files into the index. The `securityadmin.sh` file can be found in `/plugins/opensearch-security/tools/securityadmin.[sh|bat]`. Running `securityadmin.sh` **overwrites** one or more portions of the `.opendistro_security` index. Run it with extreme care to avoid losing your existing resources. Consider the following example: 1. You initialize the `.opendistro_security` index. 1. You create ten users using the REST API. -1. You decide to create a new [reserved user]({{site.url}}{{site.baseurl}}/security/access-control/api/#reserved-and-hidden-resources) using `internal_users.yml`. +1. You decide to create a new [reserved user]({{site.url}}{{site.baseurl}}/security/access-control/api/#reserved-and-hidden-resources) using `internal_users.yml`, found in the `/config/opensearch-security/` directory. 1. You run `securityadmin.sh` again to load the new reserved user into the index. 1. You lose all ten users that you created using the REST API. diff --git a/_security/configuration/yaml.md b/_security/configuration/yaml.md index 4bcb8b0460..1686c8332e 100644 --- a/_security/configuration/yaml.md +++ b/_security/configuration/yaml.md @@ -17,7 +17,7 @@ The approach we recommend for using the YAML files is to first configure [reserv ## action_groups.yml -This file contains any initial action groups that you want to add to the Security plugin. +This file contains any initial action groups that you want to add to the Security plugin. You can find the `action_groups.yml` file in `/config/opensearch-security/action_groups.yml`. Aside from some metadata, the default file is empty, because the Security plugin has a number of static action groups that it adds automatically. These static action groups cover a wide variety of use cases and are a great way to get started with the plugin. @@ -43,6 +43,8 @@ You can use `allowlist.yml` to add any endpoints and HTTP requests to a list of allowed endpoints and requests. If enabled, all users except the super admin are allowed access to only the specified endpoints and HTTP requests, and all other HTTP requests associated with the endpoint are denied.
For example, if GET `_cluster/settings` is added to the allow list, users cannot submit PUT requests to `_cluster/settings` to update cluster settings. +You can find the `allowlist.yml` file in `/config/opensearch-security/allowlist.yml`. + Note that while you can configure access to endpoints this way, for most cases, it is still best to configure permissions using the Security plugin's users and roles, which have more granular settings. ```yml @@ -92,7 +94,7 @@ requests: # Only allow GET requests to /sample-index1/_doc/1 and /sample-index2/ ## internal_users.yml -This file contains any initial users that you want to add to the Security plugin's internal user database. +This file contains any initial users that you want to add to the Security plugin's internal user database. You can find this file in `/config/opensearch-security/internal_users.yml`. The file format requires a hashed password. To generate one, run `plugins/opensearch-security/tools/hash.sh -p `. If you decide to keep any of the demo users, *change their passwords* and re-run [securityadmin.sh]({{site.url}}{{site.baseurl}}/security/configuration/security-admin/) to apply the new passwords. @@ -313,7 +315,7 @@ admin_tenant: ## opensearch.yml -In addition to many OpenSearch settings, this file contains paths to TLS certificates and their attributes, such as distinguished names and trusted certificate authorities. +In addition to many OpenSearch settings, the `opensearch.yml` file contains paths to TLS certificates and their attributes, such as distinguished names and trusted certificate authorities. You can find this file in `/config/`. ```yml plugins.security.ssl.transport.pemcert_filepath: esnode.pem From ea43da8dfa20bcd06161ddf1cc127fb24bf5e254 Mon Sep 17 00:00:00 2001 From: David Venable Date: Thu, 5 Sep 2024 11:33:47 -0500 Subject: [PATCH 024/111] Corrects the Data Prepper roadmap URL. (#8178) The original Data Prepper roadmap was a project linked to the repository. GitHub has been removing these and migrated it to a project linked to the organization instead. With this change on GitHub, the old URL became invalid and now shows a 404 page. Now commit is updating all occurrences of the old URL to use the new URL. Signed-off-by: David Venable --- _data-prepper/common-use-cases/log-analytics.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_data-prepper/common-use-cases/log-analytics.md b/_data-prepper/common-use-cases/log-analytics.md index 30a021b101..ceb26ff5b7 100644 --- a/_data-prepper/common-use-cases/log-analytics.md +++ b/_data-prepper/common-use-cases/log-analytics.md @@ -147,6 +147,6 @@ The following is an example `fluent-bit.conf` file with SSL and basic authentica See the [Data Prepper Log Ingestion Demo Guide](https://github.com/opensearch-project/data-prepper/blob/main/examples/log-ingestion/README.md) for a specific example of Apache log ingestion from `FluentBit -> Data Prepper -> OpenSearch` running through Docker. -In the future, Data Prepper will offer additional sources and processors that will make more complex log analytics pipelines available. Check out the [Data Prepper Project Roadmap](https://github.com/opensearch-project/data-prepper/projects/1) to see what is coming. +In the future, Data Prepper will offer additional sources and processors that will make more complex log analytics pipelines available. Check out the [Data Prepper Project Roadmap](https://github.com/orgs/opensearch-project/projects/221) to see what is coming.
If there is a specific source, processor, or sink that you would like to include in your log analytics workflow and is not currently on the roadmap, please bring it to our attention by creating a GitHub issue. Additionally, if you are interested in contributing to Data Prepper, see our [Contributing Guidelines](https://github.com/opensearch-project/data-prepper/blob/main/CONTRIBUTING.md) as well as our [developer guide](https://github.com/opensearch-project/data-prepper/blob/main/docs/developer_guide.md) and [plugin development guide](https://github.com/opensearch-project/data-prepper/blob/main/docs/plugin_development.md). From 12d82fafd0bd04b1b257901a8654cc286e0d5521 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Thu, 5 Sep 2024 16:26:51 -0400 Subject: [PATCH 025/111] Fix type in query insights documentation (#8182) Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _observing-your-data/query-insights/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_observing-your-data/query-insights/index.md b/_observing-your-data/query-insights/index.md index 549371240f..b929e51491 100644 --- a/_observing-your-data/query-insights/index.md +++ b/_observing-your-data/query-insights/index.md @@ -8,7 +8,7 @@ has_toc: false # Query insights -To monitor and analyze the search queries within your OpenSearch clusterQuery information, you can obtain query insights. With minimal performance impact, query insights features aim to provide comprehensive insights into search query execution, enabling you to better understand search query characteristics, patterns, and system behavior during query execution stages. Query insights facilitate enhanced detection, diagnosis, and prevention of query performance issues, ultimately improving query processing performance, user experience, and overall system resilience. +To monitor and analyze the search queries within your OpenSearch cluster, you can obtain query insights. With minimal performance impact, query insights features aim to provide comprehensive insights into search query execution, enabling you to better understand search query characteristics, patterns, and system behavior during query execution stages. Query insights facilitate enhanced detection, diagnosis, and prevention of query performance issues, ultimately improving query processing performance, user experience, and overall system resilience. Typical use cases for query insights features include the following: From ad0d76ef42ec7404592420aec48cdb53103a30c5 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Thu, 5 Sep 2024 16:43:47 -0600 Subject: [PATCH 026/111] Delete graphs and copy edit (#8188) * Delete graphs and copy edit Signed-off-by: Melissa Vagi * Delete graphs and copy edit Signed-off-by: Melissa Vagi * Delete graphs and copy edit Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi --- _dashboards/query-workbench.md | 46 +++++++++------------------------- 1 file changed, 12 insertions(+), 34 deletions(-) diff --git a/_dashboards/query-workbench.md b/_dashboards/query-workbench.md index 8fe41afcdf..700d6a7340 100644 --- a/_dashboards/query-workbench.md +++ b/_dashboards/query-workbench.md @@ -8,19 +8,14 @@ redirect_from: # Query Workbench -Query Workbench is a tool within OpenSearch Dashboards. 
You can use Query Workbench to run on-demand [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/sql/index/) and [PPL]({{site.url}}{{site.baseurl}}/search-plugins/sql/ppl/index/) queries, translate queries into their equivalent REST API calls, and view and save results in different [response formats]({{site.url}}{{site.baseurl}}/search-plugins/sql/response-formats/). +You can use Query Workbench in OpenSearch Dashboards to run on-demand [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/sql/index/) and [PPL]({{site.url}}{{site.baseurl}}/search-plugins/sql/ppl/index/) queries, translate queries into their equivalent REST API calls, and view and save results in different [response formats]({{site.url}}{{site.baseurl}}/search-plugins/sql/response-formats/). -A view of the Query Workbench interface within OpenSearch Dashboards is shown in the following image. - -Query Workbench interface within OpenSearch Dashboards - -## Prerequisites - -Before getting started, make sure you have [indexed your data]({{site.url}}{{site.baseurl}}/im-plugin/index/). +Query Workbench does not support delete or update operations through SQL or PPL. Access to data is read-only. +{: .important} -For this tutorial, you can index the following sample documents. Alternatively, you can use the [OpenSearch Playground](https://playground.opensearch.org/app/opensearch-query-workbench#/), which has preloaded indexes that you can use to try out Query Workbench. +## Prerequisites -To index sample documents, send the following [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) request: +Before getting started with this tutorial, index the sample documents by sending the following [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) request: ```json PUT accounts/_bulk?refresh @@ -35,9 +30,11 @@ PUT accounts/_bulk?refresh ``` {% include copy-curl.html %} -## Running SQL queries within Query Workbench +See [Managing indexes]({{site.url}}{{site.baseurl}}/im-plugin/index/) to learn about indexing your own data. -Follow these steps to learn how to run SQL queries against your OpenSearch data using Query Workbench: +## Running SQL queries within Query Workbench + + The following steps guide you through running SQL queries against OpenSearch data: 1. Access Query Workbench. - To access Query Workbench, go to OpenSearch Dashboards and choose **OpenSearch Plugins** > **Query Workbench** from the main menu. @@ -64,23 +61,15 @@ Follow these steps to learn how to run SQL queries against your OpenSearch data 3. View the results. - View the results in the **Results** pane, which presents the query output in tabular format. You can filter and download the results as needed. - The following image shows the query editor pane and results pane for the preceding SQL query: - - Query Workbench SQL query input and results output panes - 4. Clear the query editor. - Select the **Clear** button to clear the query editor and run a new query. 5. Examine how the query is processed. - - Select the **Explain** button to examine how OpenSearch processes the query, including the steps involved and order of operations. - - The following image shows the explanation of the SQL query that was run in step 2. - - Query Workbench SQL query explanation pane + - Select the **Explain** button to examine how OpenSearch processes the query, including the steps involved and order of operations. 
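As an example for step 2 above, you could run a simple SQL statement against the sample `accounts` index; the selected fields follow the sample documents and are illustrative only:

```sql
SELECT account_number, firstname, lastname
FROM accounts
WHERE age > 30;
```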
## Running PPL queries within Query Workbench -Follow these steps to learn how to run PPL queries against your OpenSearch data using Query Workbench: +Follow these steps to learn how to run PPL queries against OpenSearch data: 1. Access Query Workbench. - To access Query Workbench, go to OpenSearch Dashboards and choose **OpenSearch Plugins** > **Query Workbench** from the main menu. @@ -100,19 +89,8 @@ Follow these steps to learn how to run PPL queries against your OpenSearch data 3. View the results. - View the results in the **Results** pane, which presents the query output in tabular format. - The following image shows the query editor pane and results pane for the PPL query that was run in step 2: - - Query Workbench PPL query input and results output panes - 4. Clear the query editor. - Select the **Clear** button to clear the query editor and run a new query. 5. Examine how the query is processed. - - Select the **Explain** button to examine how OpenSearch processes the query, including the steps involved and order of operations. - - The following image shows the explanation of the PPL query that was run in step 2. - - Query Workbench PPL query explanation pane - -Query Workbench does not support delete or update operations through SQL or PPL. Access to data is read-only. -{: .important} \ No newline at end of file + - Select the **Explain** button to examine how OpenSearch processes the query, including the steps involved and order of operations. From 62a4c18a3ea64f8d0811c2cedcee8fbbe69e5b05 Mon Sep 17 00:00:00 2001 From: jazzl0ver Date: Fri, 6 Sep 2024 22:20:03 +0300 Subject: [PATCH 027/111] user accounts manipulation audit example (#8158) * user accounts manipulation audit example Signed-off-by: jazzl0ver * user accounts manipulation audit example Signed-off-by: jazzl0ver * user accounts manipulation audit example Signed-off-by: jazzl0ver * Update _security/audit-logs/index.md Co-authored-by: Craig Perkins Signed-off-by: jazzl0ver * Update _security/audit-logs/index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: jazzl0ver Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Craig Perkins Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _security/audit-logs/index.md | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/_security/audit-logs/index.md b/_security/audit-logs/index.md index becb001ec0..8eeea33447 100644 --- a/_security/audit-logs/index.md +++ b/_security/audit-logs/index.md @@ -224,3 +224,36 @@ plugins.security.audit.config.threadpool.max_queue_len: 100000 To disable audit logs after they've been enabled, remove the `plugins.security.audit.type: internal_opensearch` setting from `opensearch.yml`, or switch off the **Enable audit logging** check box in OpenSearch Dashboards. +## Audit user account manipulation + +To enable audit logging on changes to a security index, such as changes to roles mappings and role creation or deletion, use the following settings in the `compliance:` portion of the audit log configuration, as shown in the following example: + +``` +_meta: + type: "audit" + config_version: 2 + +config: + # enable/disable audit logging + enabled: true + + ... 
+ + + compliance: + # enable/disable compliance + enabled: true + + # Log updates to internal security changes + internal_config: true + + # Log only metadata of the document for write events + write_metadata_only: false + + # Log only diffs for document updates + write_log_diffs: true + + # List of indices to watch for write events. Wildcard patterns are supported + # write_watched_indices: ["twitter", "logs-*"] + write_watched_indices: [".opendistro_security"] +``` From b79eed39e9c8ea933d64b80a23caad14ca12941c Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Mon, 9 Sep 2024 15:18:33 -0400 Subject: [PATCH 028/111] Fix heading levels in geoshape query documentation (#8198) * Fix heading levels in geoshape query documentation Signed-off-by: Fanit Kolchina * One more Signed-off-by: Fanit Kolchina --------- Signed-off-by: Fanit Kolchina --- _query-dsl/geo-and-xy/geoshape.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/_query-dsl/geo-and-xy/geoshape.md b/_query-dsl/geo-and-xy/geoshape.md index 42948666f4..8acc691c3a 100644 --- a/_query-dsl/geo-and-xy/geoshape.md +++ b/_query-dsl/geo-and-xy/geoshape.md @@ -25,15 +25,15 @@ Relation | Description | Supporting geographic field type ## Defining the shape in a geoshape query -You can define the shape to filter documents in a geoshape query either by providing a new shape definition at query time or by referencing the name of a shape pre-indexed in another index. +You can define the shape to filter documents in a geoshape query either by [providing a new shape definition at query time](#using-a-new-shape-definition) or by [referencing the name of a shape pre-indexed in another index](#using-a-pre-indexed-shape-definition). -### Using a new shape definition +## Using a new shape definition To provide a new shape to a geoshape query, define it in the `geo_shape` field. You must define the geoshape in [GeoJSON format](https://geojson.org/). The following example illustrates searching for documents containing geoshapes that match a geoshape defined at query time. -#### Step 1: Create an index +### Step 1: Create an index First, create an index and map the `location` field as a `geo_shape`: @@ -422,7 +422,7 @@ GET /testindex/_search Geoshape queries whose geometry collection contains a linestring or a multilinestring do not support the `WITHIN` relation. {: .note} -### Using a pre-indexed shape definition +## Using a pre-indexed shape definition When constructing a geoshape query, you can also reference the name of a shape pre-indexed in another index. Using this method, you can define a geoshape at index time and refer to it by name at search time. 
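To illustrate, a query that references a pre-indexed shape supplies the index, document ID, and field path of the stored shape. The following request is a sketch only; the `pre-indexed-shapes` index, the `search-area` document ID, and the `boundaries` path are assumed names for this example:

```json
GET /testindex/_search
{
  "query": {
    "geo_shape": {
      "location": {
        "indexed_shape": {
          "index": "pre-indexed-shapes",
          "id": "search-area",
          "path": "boundaries"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}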
From 9435b466de5f1512b25dd5b75fd171e161514048 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 10 Sep 2024 15:04:18 -0400 Subject: [PATCH 029/111] Remove ODFE color scheme (#8208) Signed-off-by: Fanit Kolchina --- _sass/color_schemes/odfe.scss | 75 ----------------------------------- 1 file changed, 75 deletions(-) delete mode 100644 _sass/color_schemes/odfe.scss diff --git a/_sass/color_schemes/odfe.scss b/_sass/color_schemes/odfe.scss deleted file mode 100644 index f9b2ca02ba..0000000000 --- a/_sass/color_schemes/odfe.scss +++ /dev/null @@ -1,75 +0,0 @@ -// -// Brand colors -// - -$white: #FFFFFF; - -$grey-dk-300: #241F21; // Error -$grey-dk-250: mix(white, $grey-dk-300, 12.5%); -$grey-dk-200: mix(white, $grey-dk-300, 25%); -$grey-dk-100: mix(white, $grey-dk-300, 50%); -$grey-dk-000: mix(white, $grey-dk-300, 75%); - -$grey-lt-300: #DBDBDB; // Cloud -$grey-lt-200: mix(white, $grey-lt-300, 25%); -$grey-lt-100: mix(white, $grey-lt-300, 50%); -$grey-lt-000: mix(white, $grey-lt-300, 75%); - -$blue-300: #00007C; // Meta -$blue-200: mix(white, $blue-300, 25%); -$blue-100: mix(white, $blue-300, 50%); -$blue-000: mix(white, $blue-300, 75%); - -$purple-300: #9600FF; // Prpl -$purple-200: mix(white, $purple-300, 25%); -$purple-100: mix(white, $purple-300, 50%); -$purple-000: mix(white, $purple-300, 75%); - -$green-300: #00671A; // Element -$green-200: mix(white, $green-300, 25%); -$green-100: mix(white, $green-300, 50%); -$green-000: mix(white, $green-300, 75%); - -$yellow-300: #FFDF00; // Kan-Banana -$yellow-200: mix(white, $yellow-300, 25%); -$yellow-100: mix(white, $yellow-300, 50%); -$yellow-000: mix(white, $yellow-300, 75%); - -$red-300: #BD145A; // Ruby -$red-200: mix(white, $red-300, 25%); -$red-100: mix(white, $red-300, 50%); -$red-000: mix(white, $red-300, 75%); - -$blue-lt-300: #0000FF; // Cascade -$blue-lt-200: mix(white, $blue-lt-300, 25%); -$blue-lt-100: mix(white, $blue-lt-300, 50%); -$blue-lt-000: mix(white, $blue-lt-300, 75%); - -/* -Other, unused brand colors - -Float #2797F4 -Firewall #0FF006B -Hyper Pink #F261A1 -Cluster #ED20EB -Back End #808080 -Python #25EE5C -Warm Node #FEA501 -*/ - -$body-background-color: $white; -$sidebar-color: $grey-lt-000; -$code-background-color: $grey-lt-000; - -$body-text-color: $grey-dk-200; -$body-heading-color: $grey-dk-300; -$nav-child-link-color: $grey-dk-200; -$link-color: mix(black, $blue-lt-300, 37.5%); -$btn-primary-color: $purple-300; -$base-button-color: $grey-lt-000; - -// $border-color: $grey-dk-200; -// $search-result-preview-color: $grey-dk-000; -// $search-background-color: $grey-dk-250; -// $table-background-color: $grey-dk-250; -// $feedback-color: darken($sidebar-color, 3%); From 41f62e09e8c0576af0f40df06714fc505b3747b7 Mon Sep 17 00:00:00 2001 From: anand kumar rai Date: Wed, 11 Sep 2024 20:19:54 +0530 Subject: [PATCH 030/111] Add documentation for max_number_processors (#8157) * Add documentation for max_number_processors Signed-off-by: Rai * Refined the documentation Signed-off-by: Rai * Doc review Signed-off-by: Melissa Vagi * Update _ingest-pipelines/processors/index-processors.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _ingest-pipelines/processors/index-processors.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi --------- Signed-off-by: Rai Signed-off-by: Melissa Vagi Co-authored-by: Rai Co-authored-by: Melissa Vagi Co-authored-by: Nathan Bower --- _ingest-pipelines/processors/index-processors.md | 6 ++++++ 1 file changed, 6 
insertions(+) diff --git a/_ingest-pipelines/processors/index-processors.md b/_ingest-pipelines/processors/index-processors.md index 0e1ee1e114..9628a16728 100644 --- a/_ingest-pipelines/processors/index-processors.md +++ b/_ingest-pipelines/processors/index-processors.md @@ -69,6 +69,12 @@ Processor type | Description `urldecode` | Decodes a string from URL-encoded format. `user_agent` | Extracts details from the user agent sent by a browser to its web requests. +## Processor limit settings + +You can limit the number of ingest processors using the cluster setting `cluster.ingest.max_number_processors`. The total number of processors includes both the number of processors and the number of [`on_failure`]({{site.url}}{{site.baseurl}}/ingest-pipelines/pipeline-failures/) processors. + +The default value for `cluster.ingest.max_number_processors` is `Integer.MAX_VALUE`. Adding a higher number of processors than the value configured in `cluster.ingest.max_number_processors` will throw an `IllegalStateException`. + ## Batch-enabled processors Some processors support batch ingestion---they can process multiple documents at the same time as a batch. These batch-enabled processors usually provide better performance when using batch processing. For batch processing, use the [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) and provide a `batch_size` parameter. All batch-enabled processors have a batch mode and a single-document mode. When you ingest documents using the `PUT` method, the processor functions in single-document mode and processes documents in series. Currently, only the `text_embedding` and `sparse_encoding` processors are batch enabled. All other processors process documents one at a time. From ce3b0fecb049d427f2057e0f9c44291109efaf8c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C4=90=E1=BB=97=20Tr=E1=BB=8Dng=20H=E1=BA=A3i?= <41283691+hainenber@users.noreply.github.com> Date: Wed, 11 Sep 2024 22:54:33 +0700 Subject: [PATCH 031/111] Allow copy as curl for Query DSL example in "Updating documents" section (#8213) Signed-off-by: hainenber --- _getting-started/communicate.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_getting-started/communicate.md b/_getting-started/communicate.md index 9960f63b2c..3472270c30 100644 --- a/_getting-started/communicate.md +++ b/_getting-started/communicate.md @@ -200,7 +200,7 @@ PUT /students/_doc/1 "address": "123 Main St." } ``` -{% include copy.html %} +{% include copy-curl.html %} Alternatively, you can update parts of a document by calling the Update Document API: From d91e281d15090e1d6e79917454bc8620f9508a08 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Wed, 11 Sep 2024 12:32:30 -0400 Subject: [PATCH 032/111] Explicitly insert text that links PR with issue in PR template (#8218) Signed-off-by: Fanit Kolchina --- .github/PULL_REQUEST_TEMPLATE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 21b6fbfea6..fd4213b7e5 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -2,7 +2,7 @@ _Describe what this change achieves._ ### Issues Resolved -_List any issues this PR will resolve, e.g. Closes [...]._ +Closes #[_insert issue number_] ### Version _List the OpenSearch version to which this PR applies, e.g. 
2.14, 2.12--2.14, or all._ From 1c3e4361dd9a9d9436aa2668c8b8abcd8fe619c0 Mon Sep 17 00:00:00 2001 From: Ganesh Krishna Ramadurai Date: Wed, 11 Sep 2024 09:57:08 -0700 Subject: [PATCH 033/111] Doc update for concurrent search (#8181) * Doc update for concurrent search Signed-off-by: Ganesh Ramadurai * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Ganesh Krishna Ramadurai * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Ganesh Ramadurai Signed-off-by: Ganesh Krishna Ramadurai Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _search-plugins/concurrent-segment-search.md | 97 ++++++++++++++++++-- 1 file changed, 91 insertions(+), 6 deletions(-) diff --git a/_search-plugins/concurrent-segment-search.md b/_search-plugins/concurrent-segment-search.md index cbbb993ac9..80614e2fff 100644 --- a/_search-plugins/concurrent-segment-search.md +++ b/_search-plugins/concurrent-segment-search.md @@ -22,6 +22,8 @@ Without concurrent segment search, Lucene executes a request sequentially across ## Enabling concurrent segment search at the index or cluster level +Starting with OpenSearch version 2.17, you can use the `search.concurrent_segment_search.mode` setting to configure concurrent segment search on your cluster. The existing `search.concurrent_segment_search.enabled` setting will be deprecated in future version releases in favor of the new setting. + By default, concurrent segment search is disabled on the cluster. You can enable concurrent segment search at two levels: - Cluster level @@ -30,8 +32,37 @@ By default, concurrent segment search is disabled on the cluster. You can enable The index-level setting takes priority over the cluster-level setting. Thus, if the cluster setting is enabled but the index setting is disabled, then concurrent segment search will be disabled for that index. Because of this, the index-level setting is not evaluated unless it is explicitly set, regardless of the default value configured for the setting. You can retrieve the current value of the index-level setting by calling the [Index Settings API]({{site.url}}{{site.baseurl}}/api-reference/index-apis/get-settings/) and omitting the `?include_defaults` query parameter. {: .note} -To enable concurrent segment search for all indexes in the cluster, set the following dynamic cluster setting: +Both the cluster- and index-level `search.concurrent_segment_search.mode` settings accept the following values: + +- `all`: Enables concurrent segment search across all search requests. This is equivalent to setting `search.concurrent_segment_search.enabled` to `true`. + +- `none`: Disables concurrent segment search for all search requests, effectively turning off the feature. This is equivalent to setting `search.concurrent_segment_search.enabled` to `false`. This is the **default** behavior. + +- `auto`: In this mode, OpenSearch will use the pluggable _concurrent search decider_ to decide whether to use a concurrent or sequential path for the search request based on the query evaluation and the presence of aggregations in the request. By default, if there are no deciders configured by any plugin, then the decision to use concurrent search will be made based on the presence of aggregations in the request. 
For more information about the pluggable decider semantics, see [Pluggable concurrent search deciders](#pluggable-concurrent-search-deciders-concurrentsearchrequestdecider). + +To enable concurrent segment search for all search requests across every index in the cluster, send the following request: +```json +PUT _cluster/settings +{ + "persistent":{ + "search.concurrent_segment_search.mode": "all" + } +} +``` +{% include copy-curl.html %} + +To enable concurrent segment search for all search requests on a particular index, specify the index name in the endpoint: + +```json +PUT /_settings +{ + "index.search.concurrent_segment_search.mode": "all" +} +``` +{% include copy-curl.html %} + +You can continue to use the existing `search.concurrent_segment_search.enabled` setting to enable concurrent segment search for all indexes in the cluster as follows: ```json PUT _cluster/settings { @@ -52,6 +83,35 @@ PUT /_settings ``` {% include copy-curl.html %} + +When evaluating whether concurrent segment search is enabled on a cluster, the `search.concurrent_segment_search.mode` setting takes precedence over the `search.concurrent_segment_search.enabled` setting. +If the `search.concurrent_segment_search.mode` setting is not explicitly set, then the `search.concurrent_segment_search.enabled` setting will be evaluated to determine whether to enable concurrent segment search. + +When upgrading a cluster from an earlier version that specifies the older `search.concurrent_segment_search.enabled` setting, this setting will continue to be honored. However, once the `search.concurrent_segment_search.mode` is set, it will override the previous setting, enabling or disabling concurrent search based on the specified mode. +We recommend setting `search.concurrent_segment_search.enabled` to `null` on your cluster once you configure `search.concurrent_segment_search.mode`: + +```json +PUT _cluster/settings +{ + "persistent":{ + "search.concurrent_segment_search.enabled": null + } +} +``` +{% include copy-curl.html %} + +To disable the old setting for a particular index, specify the index name in the endpoint: +```json +PUT /_settings +{ + "index.search.concurrent_segment_search.enabled": null +} +``` +{% include copy-curl.html %} + + + + ## Slicing mechanisms You can choose one of two available mechanisms for assigning segments to slices: the default [Lucene mechanism](#the-lucene-mechanism) or the [max slice count mechanism](#the-max-slice-count-mechanism). @@ -66,7 +126,10 @@ The _max slice count_ mechanism is an alternative slicing mechanism that uses a ### Setting the slicing mechanism -By default, concurrent segment search uses the Lucene mechanism to calculate the number of slices for each shard-level request. To use the max slice count mechanism instead, configure the `search.concurrent.max_slice_count` cluster setting: +By default, concurrent segment search uses the Lucene mechanism to calculate the number of slices for each shard-level request. +To use the max slice count mechanism instead, you can set the slice count for concurrent segment search at either the cluster level or index level. 
+ +To configure the slice count for all indexes in a cluster, use the following dynamic cluster setting: ```json PUT _cluster/settings @@ -78,7 +141,17 @@ PUT _cluster/settings ``` {% include copy-curl.html %} -The `search.concurrent.max_slice_count` setting can take the following valid values: +To configure the slice count for a particular index, specify the index name in the endpoint: + +```json +PUT /_settings +{ + "index.search.concurrent.max_slice_count": 2 +} +``` +{% include copy-curl.html %} + +Both the cluster- and index-level `search.concurrent.max_slice_count` settings can take the following valid values: - `0`: Use the default Lucene mechanism. - Positive integer: Use the max target slice count mechanism. Usually, a value between 2 and 8 should be sufficient. @@ -117,8 +190,20 @@ Non-concurrent search calculates the document count error and returns it in the For more information about how `shard_size` can affect both `doc_count_error_upper_bound` and collected buckets, see [this GitHub issue](https://github.com/opensearch-project/OpenSearch/issues/11680#issuecomment-1885882985). -## Developer information: AggregatorFactory changes +## Developer information + +The following sections provide additional information for developers. + +### AggregatorFactory changes + +Because of implementation details, not all aggregator types can support concurrent segment search. To accommodate this, we have introduced a [`supportsConcurrentSegmentSearch()`](https://github.com/opensearch-project/OpenSearch/blob/2.x/server/src/main/java/org/opensearch/search/aggregations/AggregatorFactory.java#L123) method in the `AggregatorFactory` class to indicate whether a given aggregation type supports concurrent segment search. By default, this method returns `false`. Any aggregator that needs to support concurrent segment search must override this method in its own factory implementation. + +To ensure that a custom plugin-based `Aggregator` implementation functions with the concurrent search path, plugin developers can verify their implementation with concurrent search enabled and then update the plugin to override the [`supportsConcurrentSegmentSearch()`](https://github.com/opensearch-project/OpenSearch/blob/2.x/server/src/main/java/org/opensearch/search/aggregations/AggregatorFactory.java#L123) method to return `true`. + +### Pluggable concurrent search deciders: ConcurrentSearchRequestDecider -Because of implementation details, not all aggregator types can support concurrent segment search. To accommodate this, we have introduced a [`supportsConcurrentSegmentSearch()`](https://github.com/opensearch-project/OpenSearch/blob/bb38ed4836496ac70258c2472668325a012ea3ed/server/src/main/java/org/opensearch/search/aggregations/AggregatorFactory.java#L121) method in the `AggregatorFactory` class to indicate whether a given aggregation type supports concurrent segment search. By default, this method returns `false`. Any aggregator that needs to support concurrent segment search must override this method in its own factory implementation. 
+Introduced 2.17 +{: .label .label-purple } -To ensure that a custom plugin-based `Aggregator` implementation works with the concurrent search path, plugin developers can verify their implementation with concurrent search enabled and then update the plugin to override the [`supportsConcurrentSegmentSearch()`](https://github.com/opensearch-project/OpenSearch/blob/bb38ed4836496ac70258c2472668325a012ea3ed/server/src/main/java/org/opensearch/search/aggregations/AggregatorFactory.java#L121) method to return `true`. +Plugin developers can customize the concurrent search decision-making for `auto` mode by extending [`ConcurrentSearchRequestDecider`](https://github.com/opensearch-project/OpenSearch/blob/2.x/server/src/main/java/org/opensearch/search/deciders/ConcurrentSearchRequestDecider.java) and registering its factory through [`SearchPlugin#getConcurrentSearchRequestFactories()`](https://github.com/opensearch-project/OpenSearch/blob/2.x/server/src/main/java/org/opensearch/plugins/SearchPlugin.java#L148). The deciders are evaluated only if a request does not belong to any category listed in the [Limitations](#limitations) and [Other considerations](#other-considerations) sections. For more information about the decider implementation, see [the corresponding GitHub issue](https://github.com/opensearch-project/OpenSearch/issues/15259). +The search request is parsed using a `QueryBuilderVisitor`, which calls the [`ConcurrentSearchRequestDecider#evaluateForQuery()`](https://github.com/opensearch-project/OpenSearch/blob/2.x/server/src/main/java/org/opensearch/search/deciders/ConcurrentSearchRequestDecider.java#L36) method of all the configured deciders for every node of the `QueryBuilder` tree in the search request. The final concurrent search decision is obtained by combining the decision from each decider returned by the [`ConcurrentSearchRequestDecider#getConcurrentSearchDecision()`](https://github.com/opensearch-project/OpenSearch/blob/2.x/server/src/main/java/org/opensearch/search/deciders/ConcurrentSearchRequestDecider.java#L44) method. 
\ No newline at end of file From 632e8f2de0fbb7d357bb4e96bcd6caa5c9a395ac Mon Sep 17 00:00:00 2001 From: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Date: Wed, 11 Sep 2024 22:32:25 +0530 Subject: [PATCH 034/111] Add new settings for remote publication (#8176) * Add new settings for remote publication Signed-off-by: Sooraj Sinha * Update remote-cluster-state.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * remove redundant lines Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Sooraj Sinha Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- .../remote-store/remote-cluster-state.md | 21 ++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/_tuning-your-cluster/availability-and-recovery/remote-store/remote-cluster-state.md b/_tuning-your-cluster/availability-and-recovery/remote-store/remote-cluster-state.md index d967aca914..03cd1716f0 100644 --- a/_tuning-your-cluster/availability-and-recovery/remote-store/remote-cluster-state.md +++ b/_tuning-your-cluster/availability-and-recovery/remote-store/remote-cluster-state.md @@ -67,10 +67,14 @@ The remote cluster state functionality has the following limitations: ## Remote cluster state publication - The cluster manager node processes updates to the cluster state. It then publishes the updated cluster state through the local transport layer to all of the follower nodes. With the `remote_store.publication` feature enabled, the cluster state is backed up to the remote store during every state update. The follower nodes can then fetch the state from the remote store directly, which reduces the overhead on the cluster manager node for publication. -To enable the feature flag for the `remote_store.publication` feature, follow the steps in the [experimental feature flag documentation]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/experimental/). +To enable this feature, configure the following setting in `opensearch.yml`: + +```yml +# Enable Remote cluster state publication +cluster.remote_store.publication.enabled: true +``` Enabling the setting does not change the publication flow, and follower nodes will not send acknowledgements back to the cluster manager node until they download the updated cluster state from the remote store. @@ -89,8 +93,11 @@ You do not have to use different remote store repositories for state and routing To configure remote publication, use the following cluster settings. -Setting | Default | Description -:--- | :--- | :--- -`cluster.remote_store.state.read_timeout` | 20s | The amount of time to wait for remote state download to complete on the follower node. -`cluster.remote_store.routing_table.path_type` | HASHED_PREFIX | The path type to be used for creating an index routing path in the blob store. Valid values are `FIXED`, `HASHED_PREFIX`, and `HASHED_INFIX`. -`cluster.remote_store.routing_table.path_hash_algo` | FNV_1A_BASE64 | The algorithm to be used for constructing the prefix or infix of the blob store path. This setting is applied if `cluster.remote_store.routing_table.path_type` is `hashed_prefix` or `hashed_infix`. Valid algorithm values are `FNV_1A_BASE64` and `FNV_1A_COMPOSITE_1`. 
+Setting | Default | Description +:--- |:---| :--- +`cluster.remote_store.state.read_timeout` | 20s | The amount of time to wait for the remote state download to complete on the follower node. +`cluster.remote_store.state.path.prefix` | "" (Empty string) | The fixed prefix to add to the index metadata files in the blob store. +`cluster.remote_store.index_metadata.path_type` | `HASHED_PREFIX` | The path type used for creating an index metadata path in the blob store. Valid values are `FIXED`, `HASHED_PREFIX`, and `HASHED_INFIX`. +`cluster.remote_store.index_metadata.path_hash_algo` | `FNV_1A_BASE64 ` | The algorithm that constructs the prefix or infix for the index metadata path in the blob store. This setting is applied if the ``cluster.remote_store.index_metadata.path_type` setting is `HASHED_PREFIX` or `HASHED_INFIX`. Valid algorithm values are `FNV_1A_BASE64` and `FNV_1A_COMPOSITE_1`. +`cluster.remote_store.routing_table.path.prefix` | "" (Empty string) | The fixed prefix to add for the index routing files in the blob store. + From 9b609c6146eb03811ecb209b02715fd513726be1 Mon Sep 17 00:00:00 2001 From: Anshu Agarwal Date: Wed, 11 Sep 2024 22:32:35 +0530 Subject: [PATCH 035/111] Add documentation changes for shallow snapshot v2 (#8207) * Add documentation changes for shallow snapshot Signed-off-by: Anshu Agarwal * Update create-repository.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update snapshot-interoperability.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Anshu Agarwal Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Anshu Agarwal Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _api-reference/snapshots/create-repository.md | 4 +- _api-reference/snapshots/create-snapshot.md | 3 +- .../remote-store/snapshot-interoperability.md | 42 ++++++++++++++++++- 3 files changed, 46 insertions(+), 3 deletions(-) diff --git a/_api-reference/snapshots/create-repository.md b/_api-reference/snapshots/create-repository.md index ca4c04114c..367aa3606a 100644 --- a/_api-reference/snapshots/create-repository.md +++ b/_api-reference/snapshots/create-repository.md @@ -38,7 +38,7 @@ Request parameters depend on the type of repository: `fs` or `s3`. ### Common parameters -The following table lists parameters that can be used with both the `fs` and `s3` repositories. +The following table lists parameters that can be used with both the `fs` and `s3` repositories. Request field | Description :--- | :--- @@ -54,6 +54,7 @@ Request field | Description `max_restore_bytes_per_sec` | The maximum rate at which snapshots restore. Default is 40 MB per second (`40m`). Optional. `max_snapshot_bytes_per_sec` | The maximum rate at which snapshots take. Default is 40 MB per second (`40m`). Optional. 
`remote_store_index_shallow_copy` | Boolean | Determines whether the snapshots of the remote store indexes are captured as a shallow copy. Default is `false`.
+`shallow_snapshot_v2` | Boolean | Determines whether the snapshots of the remote store indexes are captured as a [shallow copy v2]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability/#shallow-snapshot-v2). Default is `false`.
`readonly` | Whether the repository is read-only. Useful when migrating from one cluster (`"readonly": false` when registering) to another cluster (`"readonly": true` when registering). Optional.

@@ -73,6 +74,7 @@ Request field | Description
`max_snapshot_bytes_per_sec` | The maximum rate at which snapshots take. Default is 40 MB per second (`40m`). Optional.
`readonly` | Whether the repository is read-only. Useful when migrating from one cluster (`"readonly": false` when registering) to another cluster (`"readonly": true` when registering). Optional.
`remote_store_index_shallow_copy` | Boolean | Whether the snapshot of the remote store indexes is captured as a shallow copy. Default is `false`.
+`shallow_snapshot_v2` | Boolean | Determines whether the snapshots of the remote store indexes are captured as a [shallow copy v2]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability/#shallow-snapshot-v2). Default is `false`.
`server_side_encryption` | Whether to encrypt snapshot files in the S3 bucket. This setting uses AES-256 with S3-managed keys. See [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html). Default is `false`. Optional.
`storage_class` | Specifies the [S3 storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) for the snapshot files. Default is `standard`. Do not use the `glacier` and `deep_archive` storage classes. Optional.

diff --git a/_api-reference/snapshots/create-snapshot.md b/_api-reference/snapshots/create-snapshot.md
index d4c9ef8219..b35d1a1d0c 100644
--- a/_api-reference/snapshots/create-snapshot.md
+++ b/_api-reference/snapshots/create-snapshot.md
@@ -144,4 +144,5 @@ The snapshot definition is returned.
| failures | array | Failures, if any, that occurred during snapshot creation. |
| shards | object | Total number of shards created along with the number of successful and failed shards. |
| state | string | Snapshot status. Possible values: `IN_PROGRESS`, `SUCCESS`, `FAILED`, `PARTIAL`. |
-| remote_store_index_shallow_copy | Boolean | Whether the snapshot of the remote store indexes is captured as a shallow copy. Default is `false`. |
\ No newline at end of file
+| remote_store_index_shallow_copy | Boolean | Whether the snapshots of the remote store indexes are captured as a shallow copy. Default is `false`. |
+| pinned_timestamp | long | A timestamp (in milliseconds) pinned by the snapshot for the implicit locking of remote store files referenced by the snapshot. |
\ No newline at end of file
diff --git a/_tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability.md b/_tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability.md
index 0415af65f1..e93f504be3 100644
--- a/_tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability.md
+++ b/_tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability.md
@@ -27,7 +27,7 @@ PUT /_snapshot/snap_repo
```
{% include copy-curl.html %}

-Once enabled, all requests using the [Snapshot API]({{site.url}}{{site.baseurl}}/api-reference/snapshots/index/) will remain the same for all snapshots. After the setting is enabled, we recommend not disabling the setting. Doing so could affect data durability.
+Once enabled, all requests using the [Snapshot API]({{site.url}}{{site.baseurl}}/api-reference/snapshots/index/) will remain the same for all snapshots. Therefore, do not disable the shallow snapshot setting after it has been enabled because disabling the setting could affect data durability.

## Considerations

@@ -37,3 +37,43 @@ Consider the following before using shallow copy snapshots:
- All nodes in the cluster must use OpenSearch 2.10 or later to take advantage of shallow copy snapshots.
- The `incremental` file count and size between the current snapshot and the last snapshot is `0` when using shallow copy snapshots.
- Searchable snapshots are not supported inside shallow copy snapshots.
+
+## Shallow snapshot v2
+
+Starting with OpenSearch 2.17, the shallow snapshot feature offers an improved version called `shallow snapshot v2`, which aims to make snapshot operations more efficient and scalable by introducing the following enhancements:
+
+* Deterministic snapshot operations: Shallow snapshot v2 makes snapshot operations more deterministic, ensuring consistent and predictable behavior.
+* Minimized cluster state updates: Shallow snapshot v2 minimizes the number of cluster state updates required during snapshot operations, reducing overhead and improving performance.
+* Scalability: Shallow snapshot v2 allows snapshot operations to scale independently of the number of shards in the cluster, enabling better performance and efficiency for large datasets.
+
+Shallow snapshot v2 must be enabled separately from shallow copies.
+
+### Enabling shallow snapshot v2
+
+To enable shallow snapshot v2, enable the following repository settings:
+
+- `remote_store_index_shallow_copy: true`
+- `shallow_snapshot_v2: true`
+
+The following example request creates a shallow snapshot v2 repository:
+
+```bash
+PUT /_snapshot/snap_repo
+{
+  "type": "s3",
+  "settings": {
+    "bucket": "test-bucket",
+    "base_path": "daily-snaps",
+    "remote_store_index_shallow_copy": true,
+    "shallow_snapshot_v2": true
+  }
+}
+```
+{% include copy-curl.html %}
+
+### Limitations
+
+Shallow snapshot v2 has the following limitations:
+
+* Shallow snapshot v2 is only supported for remote-backed indexes.
+* All nodes in the cluster must use OpenSearch 2.17 or later to take advantage of shallow snapshot v2.
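For illustration, assuming a repository such as the `snap_repo` example above with both settings enabled, snapshots are still taken using the standard Create Snapshot API; the snapshot name below is an arbitrary placeholder:

```json
PUT /_snapshot/snap_repo/snapshot_1?wait_for_completion=true
```
{% include copy-curl.html %}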
From b41858a146abd7fa8f248f9457d3e8dee07ffba6 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Wed, 11 Sep 2024 18:08:11 +0100 Subject: [PATCH 036/111] Add Ascii folding token filter (#7912) * adding asciifolding token filter page #7873 Signed-off-by: AntonEliatra * updating the naming Signed-off-by: AntonEliatra * updating as per PR comments Signed-off-by: AntonEliatra * updating the heading Signed-off-by: AntonEliatra * Updating details as per comments Signed-off-by: AntonEliatra * Updating details as per comments Signed-off-by: AntonEliatra * Updating details as per comments Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * updating as per comments Signed-off-by: Anton Rubin * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: AntonEliatra * Update asciifolding.md Signed-off-by: AntonEliatra --------- Signed-off-by: AntonEliatra Signed-off-by: Anton Rubin Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _analyzers/token-filters/asciifolding.md | 135 +++++++++++++++++++++++ _analyzers/token-filters/index.md | 2 +- 2 files changed, 136 insertions(+), 1 deletion(-) create mode 100644 _analyzers/token-filters/asciifolding.md diff --git a/_analyzers/token-filters/asciifolding.md b/_analyzers/token-filters/asciifolding.md new file mode 100644 index 0000000000..d572251988 --- /dev/null +++ b/_analyzers/token-filters/asciifolding.md @@ -0,0 +1,135 @@ +--- +layout: default +title: ASCII folding +parent: Token filters +nav_order: 20 +--- + +# ASCII folding token filter + +The `asciifolding` token filter converts non-ASCII characters to their closest ASCII equivalents. For example, *é* becomes *e*, *ü* becomes *u*, and *ñ* becomes *n*. This process is known as *transliteration*. + + +The `asciifolding` token filter offers a number of benefits: + + - **Enhanced search flexibility**: Users often omit accents or special characters when entering queries. The `asciifolding` token filter ensures that such queries still return relevant results. + - **Normalization**: Standardizes the indexing process by ensuring that accented characters are consistently converted to their ASCII equivalents. + - **Internationalization**: Particularly useful for applications including multiple languages and character sets. + +While the `asciifolding` token filter can simplify searches, it may also lead to the loss of specific information, particularly if the distinction between accented and non-accented characters in the dataset is significant. +{: .warning} + +## Parameters + +You can configure the `asciifolding` token filter using the `preserve_original` parameter. Setting this parameter to `true` keeps both the original token and its ASCII-folded version in the token stream. This can be particularly useful when you want to match both the original (with accents) and the normalized (without accents) versions of a term in a search query. Default is `false`. 
+ +## Example + +The following example request creates a new index named `example_index` and defines an analyzer with the `asciifolding` filter and `preserve_original` parameter set to `true`: + +```json +PUT /example_index +{ + "settings": { + "analysis": { + "filter": { + "custom_ascii_folding": { + "type": "asciifolding", + "preserve_original": true + } + }, + "analyzer": { + "custom_ascii_analyzer": { + "type": "custom", + "tokenizer": "standard", + "filter": [ + "lowercase", + "custom_ascii_folding" + ] + } + } + } + } +} +``` +{% include copy-curl.html %} + +## Generated tokens + +Use the following request to examine the tokens generated using the analyzer: + +```json +POST /example_index/_analyze +{ + "analyzer": "custom_ascii_analyzer", + "text": "Résumé café naïve coördinate" +} +``` +{% include copy-curl.html %} + +The response contains the generated tokens: + +```json +{ + "tokens": [ + { + "token": "resume", + "start_offset": 0, + "end_offset": 6, + "type": "", + "position": 0 + }, + { + "token": "résumé", + "start_offset": 0, + "end_offset": 6, + "type": "", + "position": 0 + }, + { + "token": "cafe", + "start_offset": 7, + "end_offset": 11, + "type": "", + "position": 1 + }, + { + "token": "café", + "start_offset": 7, + "end_offset": 11, + "type": "", + "position": 1 + }, + { + "token": "naive", + "start_offset": 12, + "end_offset": 17, + "type": "", + "position": 2 + }, + { + "token": "naïve", + "start_offset": 12, + "end_offset": 17, + "type": "", + "position": 2 + }, + { + "token": "coordinate", + "start_offset": 18, + "end_offset": 28, + "type": "", + "position": 3 + }, + { + "token": "coördinate", + "start_offset": 18, + "end_offset": 28, + "type": "", + "position": 3 + } + ] +} +``` + + diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index f4e9c434e7..a9b621d5ab 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -14,7 +14,7 @@ The following table lists all token filters that OpenSearch supports. Token filter | Underlying Lucene token filter| Description [`apostrophe`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/apostrophe/) | [ApostropheFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/ApostropheFilter.html) | In each token containing an apostrophe, the `apostrophe` token filter removes the apostrophe itself and all characters following it. -`asciifolding` | [ASCIIFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html) | Converts alphabetic, numeric, and symbolic characters. +[`asciifolding`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/asciifolding/) | [ASCIIFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html) | Converts alphabetic, numeric, and symbolic characters. `cjk_bigram` | [CJKBigramFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html) | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens. `cjk_width` | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules:
- Folds full-width ASCII character variants into the equivalent basic Latin characters.
- Folds half-width Katakana character variants into the equivalent Kana characters. `classic` | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms. From 9bc06e46376ea29cc7015e07a091333e583adb97 Mon Sep 17 00:00:00 2001 From: Ashish Singh Date: Wed, 11 Sep 2024 22:38:27 +0530 Subject: [PATCH 037/111] Create documentation for snapshots with hashed prefix path type (#8196) * Create documentation for snapshots with hashed prefix path type Signed-off-by: Ashish Singh * Add documentation on new cluster settings for fixed prefix Signed-off-by: Ashish Singh * Update create-repository.md * Update create-repository.md * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Ashish Singh Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _api-reference/snapshots/create-repository.md | 9 +++++++++ .../configuring-opensearch/index-settings.md | 6 ++++++ 2 files changed, 15 insertions(+) diff --git a/_api-reference/snapshots/create-repository.md b/_api-reference/snapshots/create-repository.md index 367aa3606a..34e2ea8376 100644 --- a/_api-reference/snapshots/create-repository.md +++ b/_api-reference/snapshots/create-repository.md @@ -43,6 +43,15 @@ The following table lists parameters that can be used with both the `fs` and `s3 Request field | Description :--- | :--- `prefix_mode_verification` | When enabled, adds a hashed value of a random seed to the prefix for repository verification. For remote-store-enabled clusters, you can add the `setting.prefix_mode_verification` setting to the node attributes for the supplied repository. This field works with both new and existing repositories. Optional. +`shard_path_type` | Controls the path structure of shard-level blobs. Supported values are `FIXED`, `HASHED_PREFIX`, and `HASHED_INFIX`. For more information about each value, see [shard_path_type values](#shard_path_type-values)/. Default is `FIXED`. Optional. + +#### shard_path_type values + +The following values are supported in the `shard_path_type` setting: + +- `FIXED`: Keeps the path structure in the existing hierarchical manner, such as `//indices//0/`. +- `HASHED_PREFIX`: Prepends a hashed prefix at the start of the path for each unique shard ID, for example, `///indices//0/`. +- `HASHED_INFIX`: Appends a hashed prefix after the base path for each unique shard ID, for example, `///indices//0/`. The hash method used is `FNV_1A_COMPOSITE_1`, which uses the `FNV1a` hash function and generates a custom-encoded 64-bit hash value that scales well with most remote store options. `FNV1a` takes the most significant 6 bits to create a URL-safe Base64 character and the next 14 bits to create a binary string. 
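As a minimal sketch of the `shard_path_type` options described above, the following request registers an `s3` repository that stores shard-level blobs under hashed prefixes; the repository name, bucket, and base path are placeholder values, and the parameter is assumed to be supplied alongside the other repository settings:

```json
PUT /_snapshot/my-hashed-prefix-repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket",
    "base_path": "snapshots",
    "shard_path_type": "HASHED_PREFIX"
  }
}
```
{% include copy-curl.html %}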
### fs repository diff --git a/_install-and-configure/configuring-opensearch/index-settings.md b/_install-and-configure/configuring-opensearch/index-settings.md index a1894a0d2c..bd9b9651aa 100644 --- a/_install-and-configure/configuring-opensearch/index-settings.md +++ b/_install-and-configure/configuring-opensearch/index-settings.md @@ -73,6 +73,12 @@ OpenSearch supports the following dynamic cluster-level index settings: - `cluster.remote_store.segment.transfer_timeout` (Time unit): Controls the maximum amount of time to wait for all new segments to update after refresh to the remote store. If the upload does not complete within a specified amount of time, it throws a `SegmentUploadFailedException` error. Default is `30m`. It has a minimum constraint of `10m`. +- `cluster.remote_store.translog.path.prefix` (String): Controls the fixed path prefix for translog data on a remote-store-enabled cluster. This setting only applies when the `cluster.remote_store.index.path.type` setting is either `HASHED_PREFIX` or `HASHED_INFIX`. Default is an empty string, `""`. + +- `cluster.remote_store.segments.path.prefix` (String): Controls the fixed path prefix for segment data on a remote-store-enabled cluster. This setting only applies when the `cluster.remote_store.index.path.type` setting is either `HASHED_PREFIX` or `HASHED_INFIX`. Default is an empty string, `""`. + +- `cluster.snapshot.shard.path.prefix` (String): Controls the fixed path prefix for snapshot shard-level blobs. This setting only applies when the repository `shard_path_type` setting is either `HASHED_PREFIX` or `HASHED_INFIX`. Default is an empty string, `""`. + ## Index-level index settings You can specify index settings at index creation. There are two types of index settings: From 1fe62b09e0ea727ed16995cfcc00285a74251320 Mon Sep 17 00:00:00 2001 From: Harsha Vamsi Kalluri Date: Wed, 11 Sep 2024 11:02:39 -0700 Subject: [PATCH 038/111] Adding new cluster search setting docs (#8180) * Adding new cluster search setting docs Signed-off-by: Harsha Vamsi Kalluri * Update _install-and-configure/configuring-opensearch/search-settings.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Harsha Vamsi Kalluri --------- Signed-off-by: Harsha Vamsi Kalluri Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- .../configuring-opensearch/search-settings.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/_install-and-configure/configuring-opensearch/search-settings.md b/_install-and-configure/configuring-opensearch/search-settings.md index c3c4337d01..e53f05aa64 100644 --- a/_install-and-configure/configuring-opensearch/search-settings.md +++ b/_install-and-configure/configuring-opensearch/search-settings.md @@ -39,6 +39,8 @@ OpenSearch supports the following search settings: - `search.dynamic_pruning.cardinality_aggregation.max_allowed_cardinality` (Dynamic, integer): Determines the threshold for applying dynamic pruning in cardinality aggregation. If a field’s cardinality exceeds this threshold, the aggregation reverts to the default method. This is an experimental feature and may change or be removed in future versions. +- `search.keyword_index_or_doc_values_enabled` (Dynamic, Boolean): Determines whether to use the index or doc values when running `multi_term` queries on `keyword` fields. Default value is `false`. 
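As a sketch of how the dynamic `search.keyword_index_or_doc_values_enabled` setting described above can be toggled at the cluster level (the choice of `persistent` is illustrative):

```json
PUT _cluster/settings
{
  "persistent": {
    "search.keyword_index_or_doc_values_enabled": true
  }
}
```
{% include copy-curl.html %}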
+ ## Point in Time settings For information about PIT settings, see [PIT settings]({{site.url}}{{site.baseurl}}/search-plugins/point-in-time-api/#pit-settings). From cdf4985aeeb7efba4f4a60982b5ce9eea66905c8 Mon Sep 17 00:00:00 2001 From: Siddhant Deshmukh Date: Wed, 11 Sep 2024 11:35:00 -0700 Subject: [PATCH 039/111] Grouping Top N queries documentation (#8173) * Grouping Top N queries documentation Signed-off-by: Siddhant Deshmukh * Fix dead links Signed-off-by: Siddhant Deshmukh * Fix dead link Signed-off-by: Siddhant Deshmukh * Fix dead links Signed-off-by: Siddhant Deshmukh * Address reviewdog comments Signed-off-by: Siddhant Deshmukh * reviewdog fix Signed-off-by: Siddhant Deshmukh * Doc review Signed-off-by: Fanit Kolchina * Add table Signed-off-by: Siddhant Deshmukh * Table review and added ability to collapse the response Signed-off-by: Fanit Kolchina * More explanation to a couple of parameters Signed-off-by: Fanit Kolchina * Typo fix Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Editorial comment Signed-off-by: Fanit Kolchina * Update _observing-your-data/query-insights/grouping-top-n-queries.md Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Siddhant Deshmukh Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../query-insights/grouping-top-n-queries.md | 331 ++++++++++++++++++ _observing-your-data/query-insights/index.md | 3 + .../query-insights/query-metrics.md | 4 +- 3 files changed, 337 insertions(+), 1 deletion(-) create mode 100644 _observing-your-data/query-insights/grouping-top-n-queries.md diff --git a/_observing-your-data/query-insights/grouping-top-n-queries.md b/_observing-your-data/query-insights/grouping-top-n-queries.md new file mode 100644 index 0000000000..28cbcbb8e5 --- /dev/null +++ b/_observing-your-data/query-insights/grouping-top-n-queries.md @@ -0,0 +1,331 @@ +--- +layout: default +title: Grouping top N queries +parent: Query insights +nav_order: 20 +--- + +# Grouping top N queries +**Introduced 2.17** +{: .label .label-purple } + +Monitoring the [top N queries]({{site.url}}{{site.baseurl}}/observing-your-data/query-insights/top-n-queries/) can help you to identify the most resource-intensive queries based on latency, CPU, and memory usage in a specified time window. However, if a single computationally expensive query is executed multiple times, it can occupy all top N query slots, potentially preventing other expensive queries from appearing in the list. To address this issue, you can group similar queries, gaining insight into various high-impact query groups. + +Starting with OpenSearch version 2.17, the top N queries can be grouped by `similarity`, with additional grouping options planned for future version releases. + +## Grouping queries by similarity + +Grouping queries by `similarity` organizes them based on the query structure, removing everything except the core query operations. 
+ +For example, the following query: + +```json +{ + "query": { + "bool": { + "must": [ + { "exists": { "field": "field1" } } + ], + "query_string": { + "query": "search query" + } + } + } +} +``` + +Has the following corresponding query structure: + +```c +bool + must + exists + query_string +``` + +When queries share the same query structure, they are grouped together, ensuring that all similar queries belong to the same group. + + +## Aggregate metrics per group + +In addition to retrieving latency, CPU, and memory metrics for individual top N queries, you can obtain aggregate statistics for the +top N query groups. For each query group, the response includes the following statistics: +- The total latency, CPU usage, or memory usage (depending on the configured metric type) +- The total query count + +Using these statistics, you can calculate the average latency, CPU usage, or memory usage for each query group. +The response also includes one example query from the query group. + +## Configuring query grouping + +Before you enable query grouping, you must enable top N query monitoring for a metric type of your choice. For more information, see [Configuring top N query monitoring]({{site.url}}{{site.baseurl}}/observing-your-data/query-insights/top-n-queries/#configuring-top-n-query-monitoring). + +To configure grouping for top N queries, use the following steps. + +### Step 1: Enable top N query monitoring + +Ensure that top N query monitoring is enabled for at least one of the metrics: latency, CPU, or memory. For more information, see [Configuring top N query monitoring]({{site.url}}{{site.baseurl}}/observing-your-data/query-insights/top-n-queries/#configuring-top-n-query-monitoring). + +For example, to enable top N query monitoring by latency with the default settings, send the following request: + +```json +PUT _cluster/settings +{ + "persistent" : { + "search.insights.top_queries.latency.enabled" : true + } +} +``` +{% include copy-curl.html %} + +### Step 2: Configure query grouping + +Set the desired grouping method by updating the following cluster setting: + +```json +PUT _cluster/settings +{ + "persistent" : { + "search.insights.top_queries.group_by" : "similarity" + } +} +``` +{% include copy-curl.html %} + +The default value for the `group_by` setting is `none`, which disables grouping. As of OpenSearch 2.17, the supported values for `group_by` are `similarity` and `none`. + +### Step 3 (Optional): Limit the number of monitored query groups + +Optionally, you can limit the number of monitored query groups. Queries already included in the top N query list (the most resource-intensive queries) will not be considered in determining the limit. Essentially, the maximum applies only to other query groups, and the top N queries are tracked separately. This helps manage the tracking of query groups based on workload and query window size. + +To limit tracking to 100 query groups, send the following request: + +```json +PUT _cluster/settings +{ + "persistent" : { + "search.insights.top_queries.max_groups_excluding_topn" : 100 + } +} +``` +{% include copy-curl.html %} + +The default value for `max_groups_excluding_topn` is `100`, and you can set it to any value between `0` and `10,000`, inclusive. + +## Monitoring query groups + +To view the top N query groups, send the following request: + +```json +GET /_insights/top_queries +``` +{% include copy-curl.html %} + +The response contains the top N query groups: + +
+ + Response + + {: .text-delta} + +```json +{ + "top_queries": [ + { + "timestamp": 1725495127359, + "source": { + "query": { + "match_all": { + "boost": 1.0 + } + } + }, + "phase_latency_map": { + "expand": 0, + "query": 55, + "fetch": 3 + }, + "total_shards": 1, + "node_id": "ZbINz1KFS1OPeFmN-n5rdg", + "query_hashcode": "b4c4f69290df756021ca6276be5cbb75", + "task_resource_usages": [ + { + "action": "indices:data/read/search[phase/query]", + "taskId": 30, + "parentTaskId": 29, + "nodeId": "ZbINz1KFS1OPeFmN-n5rdg", + "taskResourceUsage": { + "cpu_time_in_nanos": 33249000, + "memory_in_bytes": 2896848 + } + }, + { + "action": "indices:data/read/search", + "taskId": 29, + "parentTaskId": -1, + "nodeId": "ZbINz1KFS1OPeFmN-n5rdg", + "taskResourceUsage": { + "cpu_time_in_nanos": 3151000, + "memory_in_bytes": 133936 + } + } + ], + "indices": [ + "my_index" + ], + "labels": {}, + "search_type": "query_then_fetch", + "measurements": { + "latency": { + "number": 160, + "count": 10, + "aggregationType": "AVERAGE" + } + } + }, + { + "timestamp": 1725495135160, + "source": { + "query": { + "term": { + "content": { + "value": "first", + "boost": 1.0 + } + } + } + }, + "phase_latency_map": { + "expand": 0, + "query": 18, + "fetch": 0 + }, + "total_shards": 1, + "node_id": "ZbINz1KFS1OPeFmN-n5rdg", + "query_hashcode": "c3620cc3d4df30fb3f95aeb2167289a4", + "task_resource_usages": [ + { + "action": "indices:data/read/search[phase/query]", + "taskId": 50, + "parentTaskId": 49, + "nodeId": "ZbINz1KFS1OPeFmN-n5rdg", + "taskResourceUsage": { + "cpu_time_in_nanos": 10188000, + "memory_in_bytes": 288136 + } + }, + { + "action": "indices:data/read/search", + "taskId": 49, + "parentTaskId": -1, + "nodeId": "ZbINz1KFS1OPeFmN-n5rdg", + "taskResourceUsage": { + "cpu_time_in_nanos": 262000, + "memory_in_bytes": 3216 + } + } + ], + "indices": [ + "my_index" + ], + "labels": {}, + "search_type": "query_then_fetch", + "measurements": { + "latency": { + "number": 109, + "count": 7, + "aggregationType": "AVERAGE" + } + } + }, + { + "timestamp": 1725495139766, + "source": { + "query": { + "match": { + "content": { + "query": "first", + "operator": "OR", + "prefix_length": 0, + "max_expansions": 50, + "fuzzy_transpositions": true, + "lenient": false, + "zero_terms_query": "NONE", + "auto_generate_synonyms_phrase_query": true, + "boost": 1.0 + } + } + } + }, + "phase_latency_map": { + "expand": 0, + "query": 15, + "fetch": 0 + }, + "total_shards": 1, + "node_id": "ZbINz1KFS1OPeFmN-n5rdg", + "query_hashcode": "484eaabecd13db65216b9e2ff5eee999", + "task_resource_usages": [ + { + "action": "indices:data/read/search[phase/query]", + "taskId": 64, + "parentTaskId": 63, + "nodeId": "ZbINz1KFS1OPeFmN-n5rdg", + "taskResourceUsage": { + "cpu_time_in_nanos": 12161000, + "memory_in_bytes": 473456 + } + }, + { + "action": "indices:data/read/search", + "taskId": 63, + "parentTaskId": -1, + "nodeId": "ZbINz1KFS1OPeFmN-n5rdg", + "taskResourceUsage": { + "cpu_time_in_nanos": 293000, + "memory_in_bytes": 3216 + } + } + ], + "indices": [ + "my_index" + ], + "labels": {}, + "search_type": "query_then_fetch", + "measurements": { + "latency": { + "number": 43, + "count": 3, + "aggregationType": "AVERAGE" + } + } + } + ] +} +``` + +
+ +## Response fields + +The response includes the following fields. + +Field | Data type | Description +:--- |:---| :--- +`top_queries` | Array | The list of top query groups. +`top_queries.timestamp` | Integer | The execution timestamp for the first query in the query group. +`top_queries.source` | Object | The first query in the query group. +`top_queries.phase_latency_map` | Object | The phase latency map for the first query in the query group. The map includes the amount of time, in milliseconds, that the query spent in the `expand`, `query`, and `fetch` phases. +`top_queries.total_shards` | Integer | The number of shards on which the first query was executed. +`top_queries.node_id` | String | The node ID of the node that coordinated the execution of the first query in the query group. +`top_queries.query_hashcode` | String | The hash code that uniquely identifies the query group. This is essentially the hash of the [query structure](#grouping-queries-by-similarity). +`top_queries.task_resource_usages` | Array of objects | The resource usage breakdown for the various tasks belonging to the first query in the query group. +`top_queries.indices` | Array | The indexes searched by the first query in the query group. +`top_queries.labels` | Object | Used to label the top query. +`top_queries.search_type` | String | The search request execution type (`query_then_fetch` or `dfs_query_then_fetch`). For more information, see the `search_type` parameter in the [Search API documentation]({{site.url}}{{site.baseurl}}/api-reference/search/#url-parameters). +`top_queries.measurements` | Object | The aggregate measurements for the query group. +`top_queries.measurements.latency` | Object | The aggregate latency measurements for the query group. +`top_queries.measurements.latency.number` | Integer | The total latency for the query group. +`top_queries.measurements.latency.count` | Integer | The number of queries in the query group. +`top_queries.measurements.latency.aggregationType` | String | The aggregation type for the current entry. If grouping by similarity is enabled, then `aggregationType` is `AVERAGE`. If it is not enabled, then `aggregationType` is `NONE`. \ No newline at end of file diff --git a/_observing-your-data/query-insights/index.md b/_observing-your-data/query-insights/index.md index b929e51491..ef3a65bfcd 100644 --- a/_observing-your-data/query-insights/index.md +++ b/_observing-your-data/query-insights/index.md @@ -7,6 +7,8 @@ has_toc: false --- # Query insights +**Introduced 2.12** +{: .label .label-purple } To monitor and analyze the search queries within your OpenSearch cluster, you can obtain query insights. With minimal performance impact, query insights features aim to provide comprehensive insights into search query execution, enabling you to better understand search query characteristics, patterns, and system behavior during query execution stages. Query insights facilitate enhanced detection, diagnosis, and prevention of query performance issues, ultimately improving query processing performance, user experience, and overall system resilience. 
@@ -36,4 +38,5 @@ For information about installing plugins, see [Installing plugins]({{site.url}}{ You can obtain the following information using Query Insights: - [Top n queries]({{site.url}}{{site.baseurl}}/observing-your-data/query-insights/top-n-queries/) +- [Grouping top N queries]({{site.url}}{{site.baseurl}}/observing-your-data/query-insights/grouping-top-n-queries/) - [Query metrics]({{site.url}}{{site.baseurl}}/observing-your-data/query-insights/query-metrics/) diff --git a/_observing-your-data/query-insights/query-metrics.md b/_observing-your-data/query-insights/query-metrics.md index c8caf21d65..beac8d4e18 100644 --- a/_observing-your-data/query-insights/query-metrics.md +++ b/_observing-your-data/query-insights/query-metrics.md @@ -2,10 +2,12 @@ layout: default title: Query metrics parent: Query insights -nav_order: 20 +nav_order: 30 --- # Query metrics +**Introduced 2.16** +{: .label .label-purple } Key query [metrics](#metrics), such as aggregation types, query types, latency, and resource usage per query type, are captured along the search path by using the OpenTelemetry (OTel) instrumentation framework. The telemetry data can be consumed using OTel metrics [exporters]({{site.url}}{{site.baseurl}}/observing-your-data/trace/distributed-tracing/#exporters). From f44deb242466533f8557e46422dc6885eb3c75ab Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Wed, 11 Sep 2024 11:40:31 -0700 Subject: [PATCH 040/111] Document reprovision param for Update Workflow API (#8172) * Document reprovision param for Update Workflow API Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Widdis --------- Signed-off-by: Daniel Widdis Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _automating-configurations/api/create-workflow.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/_automating-configurations/api/create-workflow.md b/_automating-configurations/api/create-workflow.md index 83c0110ac3..3fc16c754d 100644 --- a/_automating-configurations/api/create-workflow.md +++ b/_automating-configurations/api/create-workflow.md @@ -58,7 +58,7 @@ POST /_plugins/_flow_framework/workflow?validation=none ``` {% include copy-curl.html %} -You cannot update a full workflow once it has been provisioned, but you can update fields other than the `workflows` field, such as `name` and `description`: +In a workflow that has not been provisioned, you can update fields other than the `workflows` field. For example, you can update the `name` and `description` fields as follows: ```json PUT /_plugins/_flow_framework/workflow/?update_fields=true @@ -72,12 +72,25 @@ PUT /_plugins/_flow_framework/workflow/?update_fields=true You cannot specify both the `provision` and `update_fields` parameters at the same time. 
{: .note} +If a workflow has been provisioned, you can update and reprovision the full template: + +```json +PUT /_plugins/_flow_framework/workflow/?reprovision=true +{ + +} +``` + +You can add new steps to the workflow but cannot delete them. Only index setting, search pipeline, and ingest pipeline steps can currently be updated. +{: .note} + The following table lists the available query parameters. All query parameters are optional. User-provided parameters are only allowed if the `provision` parameter is set to `true`. | Parameter | Data type | Description | | :--- | :--- | :--- | | `provision` | Boolean | Whether to provision the workflow as part of the request. Default is `false`. | | `update_fields` | Boolean | Whether to update only the fields included in the request body. Default is `false`. | +| `reprovision` | Boolean | Whether to reprovision the entire template if it has already been provisioned. A complete template must be provided in the request body. Default is `false`. | | `validation` | String | Whether to validate the workflow. Valid values are `all` (validate the template) and `none` (do not validate the template). Default is `all`. | | User-provided substitution expressions | String | Parameters matching substitution expressions in the template. Only allowed if `provision` is set to `true`. Optional. If `provision` is set to `false`, you can pass these parameters in the [Provision Workflow API query parameters]({{site.url}}{{site.baseurl}}/automating-configurations/api/provision-workflow/#query-parameters). | From 01e3069c1746f93c53995921a7ecd4dbf6079712 Mon Sep 17 00:00:00 2001 From: David Zane <38449481+dzane17@users.noreply.github.com> Date: Wed, 11 Sep 2024 12:47:59 -0700 Subject: [PATCH 041/111] Update GET top N api documentation (#8139) * Update GET top N api documentation Signed-off-by: David Zane * Update _observing-your-data/query-insights/top-n-queries.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: David Zane <38449481+dzane17@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: David Zane Signed-off-by: David Zane <38449481+dzane17@users.noreply.github.com> Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _observing-your-data/query-insights/top-n-queries.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/_observing-your-data/query-insights/top-n-queries.md b/_observing-your-data/query-insights/top-n-queries.md index f07fd2dfef..b63d670926 100644 --- a/_observing-your-data/query-insights/top-n-queries.md +++ b/_observing-your-data/query-insights/top-n-queries.md @@ -7,7 +7,7 @@ nav_order: 10 # Top N queries -Monitoring the top N queries in query insights features can help you gain real-time insights into the top queries with high latency within a certain time frame (for example, the last hour). +Monitoring the top N queries using query insights allows you to gain real-time visibility into the queries with the highest latency or resource consumption in a specified time period (for example, the last hour). 
## Configuring top N query monitoring @@ -72,14 +72,14 @@ PUT _cluster/settings ## Monitoring the top N queries -You can use the Insights API endpoint to obtain the top N queries for all metric types: +You can use the Insights API endpoint to retrieve the top N queries. This API returns top N `latency` results by default. ```json GET /_insights/top_queries ``` {% include copy-curl.html %} -Specify a metric type to filter the response: +Specify the `type` parameter to retrieve the top N results for other metric types. The results will be sorted in descending order based on the specified metric type. ```json GET /_insights/top_queries?type=latency @@ -96,6 +96,9 @@ GET /_insights/top_queries?type=memory ``` {% include copy-curl.html %} +If your query returns no results, ensure that top N query monitoring is enabled for the target metric type and that search requests were made within the current [time window](#configuring-the-window-size). +{: .important} + ## Exporting top N query data You can configure your desired exporter to export top N query data to different sinks, allowing for better monitoring and analysis of your OpenSearch queries. Currently, the following exporters are supported: From 5145254a5c6f73fd6871d9d91bedc2e435f69b76 Mon Sep 17 00:00:00 2001 From: Somesh Gupta <35426854+aasom143@users.noreply.github.com> Date: Thu, 12 Sep 2024 01:20:57 +0530 Subject: [PATCH 042/111] =?UTF-8?q?Update=20doc=20for=20adding=20new=20par?= =?UTF-8?q?am=20in=20cat=20shards=20action=20for=20cancellation=E2=80=A6?= =?UTF-8?q?=20(#8127)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update doc for adding new param in cat shards action for cancellation support Signed-off-by: Somesh Gupta * Fixed comment Signed-off-by: Somesh Gupta --------- Signed-off-by: Somesh Gupta --- _api-reference/cat/cat-shards.md | 1 + 1 file changed, 1 insertion(+) diff --git a/_api-reference/cat/cat-shards.md b/_api-reference/cat/cat-shards.md index b07f11aca3..56817936a6 100644 --- a/_api-reference/cat/cat-shards.md +++ b/_api-reference/cat/cat-shards.md @@ -33,6 +33,7 @@ Parameter | Type | Description bytes | Byte size | Specify the units for byte size. For example, `7kb` or `6gb`. For more information, see [Supported units]({{site.url}}{{site.baseurl}}/opensearch/units/). local | Boolean | Whether to return information from the local node only instead of from the cluster manager node. Default is `false`. cluster_manager_timeout | Time | The amount of time to wait for a connection to the cluster manager node. Default is 30 seconds. +cancel_after_time_interval | Time | The amount of time after which the shard request will be canceled. Default is `-1`. time | Time | Specify the units for time. For example, `5d` or `7h`. For more information, see [Supported units]({{site.url}}{{site.baseurl}}/opensearch/units/). 
## Example requests From 39338d7ff0b3737d4364475f945844d09276c123 Mon Sep 17 00:00:00 2001 From: yuye-aws Date: Thu, 12 Sep 2024 04:16:45 +0800 Subject: [PATCH 043/111] Add docs on skip_validating_missing_parameters in ml-commons connector (#8118) * add doc: skip_validating_missing_parameters Signed-off-by: yuye-aws * Doc review Signed-off-by: Fanit Kolchina * update: make both description consistent Signed-off-by: yuye-aws * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: yuye-aws Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../api/connector-apis/update-connector.md | 23 +++++++------ .../remote-models/blueprints.md | 32 +++++++++---------- 2 files changed, 29 insertions(+), 26 deletions(-) diff --git a/_ml-commons-plugin/api/connector-apis/update-connector.md b/_ml-commons-plugin/api/connector-apis/update-connector.md index 64790bb57f..625d58bb62 100644 --- a/_ml-commons-plugin/api/connector-apis/update-connector.md +++ b/_ml-commons-plugin/api/connector-apis/update-connector.md @@ -29,17 +29,20 @@ PUT /_plugins/_ml/connectors/ The following table lists the updatable fields. For more information about all connector fields, see [Blueprint configuration parameters]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/blueprints#configuration-parameters). -| Field | Data type | Description | -| :--- | :--- | :--- | -| `name` | String | The name of the connector. | -| `description` | String | A description of the connector. | -| `version` | Integer | The version of the connector. | -| `protocol` | String | The protocol for the connection. For AWS services, such as Amazon SageMaker and Amazon Bedrock, use `aws_sigv4`. For all other services, use `http`. | -| `parameters` | JSON object | The default connector parameters, including `endpoint` and `model`. Any parameters included in this field can be overridden by parameters specified in a predict request. | +| Field | Data type | Description | +| :--- |:------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `name` | String | The name of the connector. | +| `description` | String | A description of the connector. | +| `version` | Integer | The connector version. | +| `protocol` | String | The protocol for the connection. For AWS services, such as Amazon SageMaker and Amazon Bedrock, use `aws_sigv4`. For all other services, use `http`. | +| `parameters` | JSON object | The default connector parameters, including `endpoint` and `model`. Any parameters included in this field can be overridden by parameters specified in a predict request. | | `credential` | JSON object | Defines any credential variables required in order to connect to your chosen endpoint. ML Commons uses **AES/GCM/NoPadding** symmetric encryption to encrypt your credentials. When the connection to the cluster first starts, OpenSearch creates a random 32-byte encryption key that persists in OpenSearch's system index. 
Therefore, you do not need to manually set the encryption key. | -| `actions` | JSON array | Defines which actions can run within the connector. If you're an administrator creating a connection, add the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/blueprints/) for your desired connection. | -| `backend_roles` | JSON array | A list of OpenSearch backend roles. For more information about setting up backend roles, see [Assigning backend roles to users]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#assigning-backend-roles-to-users). | -| `access_mode` | String | Sets the access mode for the model, either `public`, `restricted`, or `private`. Default is `private`. For more information about `access_mode`, see [Model groups]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#model-groups). | +| `actions` | JSON array | Defines which actions can run within the connector. If you're an administrator creating a connection, add the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/blueprints/) for your desired connection. | +| `backend_roles` | JSON array | A list of OpenSearch backend roles. For more information about setting up backend roles, see [Assigning backend roles to users]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#assigning-backend-roles-to-users). | +| `access_mode` | String | Sets the access mode for the model, either `public`, `restricted`, or `private`. Default is `private`. For more information about `access_mode`, see [Model groups]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#model-groups). | +| `parameters.skip_validating_missing_parameters` | Boolean | When set to `true`, this option allows you to send a request using a connector without validating any missing parameters. Default is `false`. | + + #### Example request diff --git a/_ml-commons-plugin/remote-models/blueprints.md b/_ml-commons-plugin/remote-models/blueprints.md index 254a21b068..9b95c31166 100644 --- a/_ml-commons-plugin/remote-models/blueprints.md +++ b/_ml-commons-plugin/remote-models/blueprints.md @@ -55,19 +55,20 @@ As an ML developer, you can build connector blueprints for other platforms. Usin ## Configuration parameters -| Field | Data type | Is required | Description | -|:---|:---|:---|:---| -| `name` | String | Yes | The name of the connector. | -| `description` | String | Yes | A description of the connector. | -| `version` | Integer | Yes | The version of the connector. | -| `protocol` | String | Yes | The protocol for the connection. For AWS services such as Amazon SageMaker and Amazon Bedrock, use `aws_sigv4`. For all other services, use `http`. | -| `parameters` | JSON object | Yes | The default connector parameters, including `endpoint` and `model`. Any parameters indicated in this field can be overridden by parameters specified in a predict request. | -| `credential` | JSON object | Yes | Defines any credential variables required to connect to your chosen endpoint. ML Commons uses **AES/GCM/NoPadding** symmetric encryption to encrypt your credentials. When the connection to the cluster first starts, OpenSearch creates a random 32-byte encryption key that persists in OpenSearch's system index. Therefore, you do not need to manually set the encryption key. | -| `actions` | JSON array | Yes | Defines what actions can run within the connector. 
If you're an administrator creating a connection, add the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/blueprints/) for your desired connection. | -| `backend_roles` | JSON array | Yes | A list of OpenSearch backend roles. For more information about setting up backend roles, see [Assigning backend roles to users]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#assigning-backend-roles-to-users). | -| `access_mode` | String | Yes | Sets the access mode for the model, either `public`, `restricted`, or `private`. Default is `private`. For more information about `access_mode`, see [Model groups]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#model-groups). | -| `add_all_backend_roles` | Boolean | Yes | When set to `true`, adds all `backend_roles` to the access list, which only a user with admin permissions can adjust. When set to `false`, non-admins can add `backend_roles`. | -| `client_config` | JSON object | No | The client configuration object, which provides settings that control the behavior of the client connections used by the connector. These settings allow you to manage connection limits and timeouts, ensuring efficient and reliable communication. | +| Field | Data type | Is required | Description | +|:-------------------------------------------------|:---|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `name` | String | Yes | The name of the connector. | +| `description` | String | Yes | A description of the connector. | +| `version` | Integer | Yes | The connector version. | +| `protocol` | String | Yes | The protocol for the connection. For AWS services, such as Amazon SageMaker and Amazon Bedrock, use `aws_sigv4`. For all other services, use `http`. | +| `parameters` | JSON object | Yes | The default connector parameters, including `endpoint`, `model`, and `skip_validating_missing_parameters`. Any parameters indicated in this field can be overridden by parameters specified in a predict request. | +| `credential` | JSON object | Yes | Defines any credential variables required for connecting to your chosen endpoint. ML Commons uses **AES/GCM/NoPadding** symmetric encryption to encrypt your credentials. When the cluster connection is initiated, OpenSearch creates a random 32-byte encryption key that persists in OpenSearch's system index. Therefore, you do not need to manually set the encryption key. | +| `actions` | JSON array | Yes | Defines the actions that can run within the connector. If you're an administrator creating a connection, add the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/blueprints/) for your desired connection. | +| `backend_roles` | JSON array | Yes | A list of OpenSearch backend roles. For more information about setting up backend roles, see [Assigning backend roles to users]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#assigning-backend-roles-to-users). | +| `access_mode` | String | Yes | Sets the access mode for the model, either `public`, `restricted`, or `private`. Default is `private`. 
For more information about `access_mode`, see [Model groups]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#model-groups). | +| `add_all_backend_roles` | Boolean | Yes | When set to `true`, adds all `backend_roles` to the access list, which only a user with admin permissions can adjust. When set to `false`, non-admins can add `backend_roles`. | +| `client_config` | JSON object | No | The client configuration object, which provides settings that control the behavior of the client connections used by the connector. These settings allow you to manage connection limits and timeouts, ensuring efficient and reliable communication. | +| `parameters.skip_validating_missing_parameters` | Boolean | No | When set to `true`, this option allows you to send a request using a connector without validating any missing parameters. Default is `false`. | The `actions` parameter supports the following options. @@ -76,12 +77,11 @@ The `actions` parameter supports the following options. |:---|:---|:---| | `action_type` | String | Required. Sets the ML Commons API operation to use upon connection. As of OpenSearch 2.9, only `predict` is supported. | | `method` | String | Required. Defines the HTTP method for the API call. Supports `POST` and `GET`. | -| `url` | String | Required. Sets the connection endpoint at which the action occurs. This must match the regex expression for the connection used when [adding trusted endpoints]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/index#adding-trusted-endpoints). | -| `headers` | JSON object | Sets the headers used inside the request or response body. Default is `ContentType: application/json`. If your third-party ML tool requires access control, define the required `credential` parameters in the `headers` parameter. | +| `url` | String | Required. Specifies the connection endpoint at which the action occurs. This must match the regex expression for the connection used when [adding trusted endpoints]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/index#adding-trusted-endpoints).| | `request_body` | String | Required. Sets the parameters contained in the request body of the action. The parameters must include `\"inputText\`, which specifies how users of the connector should construct the request payload for the `action_type`. | | `pre_process_function` | String | Optional. A built-in or custom Painless script used to preprocess the input data. OpenSearch provides the following built-in preprocess functions that you can call directly:
- `connector.pre_process.cohere.embedding` for [Cohere](https://cohere.com/) embedding models
- `connector.pre_process.openai.embedding` for [OpenAI](https://platform.openai.com/docs/guides/embeddings) embedding models
- `connector.pre_process.default.embedding`, which you can use to preprocess documents in neural search requests so that they are in the format that ML Commons can process with the default preprocessor (OpenSearch 2.11 or later). For more information, see [Built-in functions](#built-in-pre--and-post-processing-functions). | | `post_process_function` | String | Optional. A built-in or custom Painless script used to post-process the model output data. OpenSearch provides the following built-in post-process functions that you can call directly:
- `connector.post_process.cohere.embedding` for [Cohere text embedding models](https://docs.cohere.com/reference/embed)
- `connector.post_process.openai.embedding` for [OpenAI text embedding models](https://platform.openai.com/docs/api-reference/embeddings)
- `connector.post_process.default.embedding`, which you can use to post-process documents in the model response so that they are in the format that neural search expects (OpenSearch 2.11 or later). For more information, see [Built-in functions](#built-in-pre--and-post-processing-functions). | - +| `headers` | JSON object | Specifies the headers used in the request or response body. Default is `ContentType: application/json`. If your third-party ML tool requires access control, define the required `credential` parameters in the `headers` parameter. | The `client_config` parameter supports the following options. From 76486a4e6c779215580dd7bc0a5d924f5d3137f2 Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Wed, 11 Sep 2024 13:55:06 -0700 Subject: [PATCH 044/111] Document use_case param for Create Workflow API, link to existing docs (#8171) * Document use_case param for Create Workflow API, link to existing docs Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Daniel Widdis * Update _automating-configurations/api/create-workflow.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Widdis * Resolve merge conflicts Signed-off-by: Fanit Kolchina --------- Signed-off-by: Daniel Widdis Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower Co-authored-by: Fanit Kolchina --- .../vale/styles/Vocab/OpenSearch/Words/accept.txt | 1 + _automating-configurations/api/create-workflow.md | 13 ++++++++++++- 2 files changed, 13 insertions(+), 1 deletion(-) diff --git a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt index 11ff53efe6..d0d1c308eb 100644 --- a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt +++ b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt @@ -102,6 +102,7 @@ p\d{2} [Rr]eenable [Rr]eindex [Rr]eingest +[Rr]eprovision(ed|ing)? [Rr]erank(er|ed|ing)? [Rr]epo [Rr]ewriter diff --git a/_automating-configurations/api/create-workflow.md b/_automating-configurations/api/create-workflow.md index 3fc16c754d..770c1a1a13 100644 --- a/_automating-configurations/api/create-workflow.md +++ b/_automating-configurations/api/create-workflow.md @@ -84,6 +84,16 @@ PUT /_plugins/_flow_framework/workflow/?reprovision=true You can add new steps to the workflow but cannot delete them. Only index setting, search pipeline, and ingest pipeline steps can currently be updated. {: .note} +You can create and provision a workflow using a [workflow template]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-templates/) as follows: + +```json +POST /_plugins/_flow_framework/workflow?use_case=&provision=true +{ + "create_connector.credential.key" : "" +} +``` +{% include copy-curl.html %} + The following table lists the available query parameters. All query parameters are optional. User-provided parameters are only allowed if the `provision` parameter is set to `true`. 
| Parameter | Data type | Description | @@ -92,6 +102,7 @@ The following table lists the available query parameters. All query parameters a | `update_fields` | Boolean | Whether to update only the fields included in the request body. Default is `false`. | | `reprovision` | Boolean | Whether to reprovision the entire template if it has already been provisioned. A complete template must be provided in the request body. Default is `false`. | | `validation` | String | Whether to validate the workflow. Valid values are `all` (validate the template) and `none` (do not validate the template). Default is `all`. | +| `use_case` | String | The name of the [workflow template]({{site.url}}{{site.baseurl}}/automating-configurations/workflow-templates/#supported-workflow-templates) to use when creating the workflow. | | User-provided substitution expressions | String | Parameters matching substitution expressions in the template. Only allowed if `provision` is set to `true`. Optional. If `provision` is set to `false`, you can pass these parameters in the [Provision Workflow API query parameters]({{site.url}}{{site.baseurl}}/automating-configurations/api/provision-workflow/#query-parameters). | ## Request fields @@ -102,7 +113,7 @@ The following table lists the available request fields. |:--- |:--- |:--- |:--- | |`name` |String |Required |The name of the workflow. | |`description` |String |Optional |A description of the workflow. | -|`use_case` |String |Optional | A use case, which can be used with the Search Workflow API to find related workflows. In the future, OpenSearch may provide some standard use cases to ease categorization, but currently you can use this field to specify custom values. | +|`use_case` |String |Optional | A user-provided use case, which can be used with the [Search Workflow API]({{site.url}}{{site.baseurl}}/automating-configurations/api/search-workflow/) to find related workflows. You can use this field to specify custom values. This is distinct from the `use_case` query parameter. | |`version` |Object |Optional | A key-value map with two fields: `template`, which identifies the template version, and `compatibility`, which identifies a list of minimum required OpenSearch versions. | |`workflows` |Object |Optional |A map of workflows. Presently, only the `provision` key is supported. The value for the workflow key is a key-value map that includes fields for `user_params` and lists of `nodes` and `edges`. 
| From efd011549a423e731229f1a9eeb5d864999954f9 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Thu, 12 Sep 2024 14:41:20 -0400 Subject: [PATCH 045/111] Added introduced version labels to field types (#8227) Signed-off-by: Fanit Kolchina --- _field-types/supported-field-types/alias.md | 2 ++ _field-types/supported-field-types/binary.md | 2 ++ _field-types/supported-field-types/boolean.md | 2 ++ _field-types/supported-field-types/completion.md | 2 ++ _field-types/supported-field-types/constant-keyword.md | 2 ++ _field-types/supported-field-types/date-nanos.md | 2 ++ _field-types/supported-field-types/date.md | 2 ++ _field-types/supported-field-types/flat-object.md | 2 ++ _field-types/supported-field-types/geo-point.md | 2 ++ _field-types/supported-field-types/geo-shape.md | 2 ++ _field-types/supported-field-types/ip.md | 2 ++ _field-types/supported-field-types/join.md | 2 ++ _field-types/supported-field-types/keyword.md | 2 ++ _field-types/supported-field-types/knn-vector.md | 2 ++ _field-types/supported-field-types/match-only-text.md | 2 ++ _field-types/supported-field-types/nested.md | 2 ++ _field-types/supported-field-types/object.md | 2 ++ _field-types/supported-field-types/percolator.md | 2 ++ _field-types/supported-field-types/range.md | 2 ++ _field-types/supported-field-types/rank.md | 2 ++ _field-types/supported-field-types/search-as-you-type.md | 2 ++ _field-types/supported-field-types/text.md | 2 ++ _field-types/supported-field-types/token-count.md | 2 ++ _field-types/supported-field-types/unsigned-long.md | 2 ++ _field-types/supported-field-types/wildcard.md | 2 ++ _field-types/supported-field-types/xy-point.md | 2 ++ _field-types/supported-field-types/xy-shape.md | 2 ++ 27 files changed, 54 insertions(+) diff --git a/_field-types/supported-field-types/alias.md b/_field-types/supported-field-types/alias.md index 29cc58885c..f1f6ae9ac8 100644 --- a/_field-types/supported-field-types/alias.md +++ b/_field-types/supported-field-types/alias.md @@ -10,6 +10,8 @@ redirect_from: --- # Alias field type +**Introduced 1.0** +{: .label .label-purple } An alias field type creates another name for an existing field. You can use aliases in the[search](#using-aliases-in-search-api-operations) and [field capabilities](#using-aliases-in-field-capabilities-api-operations) API operations, with some [exceptions](#exceptions). To set up an [alias](#alias-field), you need to specify the [original field](#original-field) name in the `path` parameter. diff --git a/_field-types/supported-field-types/binary.md b/_field-types/supported-field-types/binary.md index 99d468c1dc..bb257bf7ec 100644 --- a/_field-types/supported-field-types/binary.md +++ b/_field-types/supported-field-types/binary.md @@ -10,6 +10,8 @@ redirect_from: --- # Binary field type +**Introduced 1.0** +{: .label .label-purple } A binary field type contains a binary value in [Base64](https://en.wikipedia.org/wiki/Base64) encoding that is not searchable. diff --git a/_field-types/supported-field-types/boolean.md b/_field-types/supported-field-types/boolean.md index 8233a45ad5..82cfdecf47 100644 --- a/_field-types/supported-field-types/boolean.md +++ b/_field-types/supported-field-types/boolean.md @@ -10,6 +10,8 @@ redirect_from: --- # Boolean field type +**Introduced 1.0** +{: .label .label-purple } A Boolean field type takes `true` or `false` values, or `"true"` or `"false"` strings. You can also pass an empty string (`""`) in place of a `false` value. 
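For illustration, the following minimal sketch shows a mapping with a Boolean field and a document that supplies the value as a string (the index and field names used here are hypothetical):

```json
PUT /products_example
{
  "mappings": {
    "properties": {
      "in_stock": {
        "type": "boolean"
      }
    }
  }
}
```
{% include copy-curl.html %}

```json
PUT /products_example/_doc/1
{
  "in_stock": "true"
}
```
{% include copy-curl.html %}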
diff --git a/_field-types/supported-field-types/completion.md b/_field-types/supported-field-types/completion.md index 85c803baa1..e6e392fb6d 100644 --- a/_field-types/supported-field-types/completion.md +++ b/_field-types/supported-field-types/completion.md @@ -11,6 +11,8 @@ redirect_from: --- # Completion field type +**Introduced 1.0** +{: .label .label-purple } A completion field type provides autocomplete functionality through a completion suggester. The completion suggester is a prefix suggester, so it matches the beginning of text only. A completion suggester creates an in-memory data structure, which provides faster lookups but leads to increased memory usage. You need to upload a list of all possible completions into the index before using this feature. diff --git a/_field-types/supported-field-types/constant-keyword.md b/_field-types/supported-field-types/constant-keyword.md index bf1e4afc70..4f9261f1a1 100644 --- a/_field-types/supported-field-types/constant-keyword.md +++ b/_field-types/supported-field-types/constant-keyword.md @@ -8,6 +8,8 @@ grand_parent: Supported field types --- # Constant keyword field type +**Introduced 2.14** +{: .label .label-purple } A constant keyword field uses the same value for all documents in the index. diff --git a/_field-types/supported-field-types/date-nanos.md b/_field-types/supported-field-types/date-nanos.md index 12399a69d4..eb569265fc 100644 --- a/_field-types/supported-field-types/date-nanos.md +++ b/_field-types/supported-field-types/date-nanos.md @@ -8,6 +8,8 @@ grand_parent: Supported field types --- # Date nanoseconds field type +**Introduced 1.0** +{: .label .label-purple } The `date_nanos` field type is similar to the [`date`]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/date/) field type in that it holds a date. However, `date` stores the date in millisecond resolution, while `date_nanos` stores the date in nanosecond resolution. Dates are stored as `long` values that correspond to nanoseconds since the epoch. Therefore, the range of supported dates is approximately 1970--2262. diff --git a/_field-types/supported-field-types/date.md b/_field-types/supported-field-types/date.md index fb008d1512..8ca986219b 100644 --- a/_field-types/supported-field-types/date.md +++ b/_field-types/supported-field-types/date.md @@ -11,6 +11,8 @@ redirect_from: --- # Date field type +**Introduced 1.0** +{: .label .label-purple } A date in OpenSearch can be represented as one of the following: diff --git a/_field-types/supported-field-types/flat-object.md b/_field-types/supported-field-types/flat-object.md index 933c385ac5..c9e59710e1 100644 --- a/_field-types/supported-field-types/flat-object.md +++ b/_field-types/supported-field-types/flat-object.md @@ -10,6 +10,8 @@ redirect_from: --- # Flat object field type +**Introduced 2.7** +{: .label .label-purple } In OpenSearch, you don't have to specify a mapping before indexing documents. If you don't specify a mapping, OpenSearch uses [dynamic mapping]({{site.url}}{{site.baseurl}}/field-types/index#dynamic-mapping) to map every field and its subfields in the document automatically. When you ingest documents such as logs, you may not know every field's subfield name and type in advance. In this case, dynamically mapping all new subfields can quickly lead to a "mapping explosion," where the growing number of fields may degrade the performance of your cluster. 
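As a minimal sketch of one way to avoid this problem, the following hypothetical mapping stores an entire JSON object in a single `flat_object` field instead of dynamically mapping every subfield (the index and field names are illustrative):

```json
PUT /logs_example
{
  "mappings": {
    "properties": {
      "issue": {
        "type": "flat_object"
      }
    }
  }
}
```
{% include copy-curl.html %}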
diff --git a/_field-types/supported-field-types/geo-point.md b/_field-types/supported-field-types/geo-point.md index 0912dc618d..96586d044f 100644 --- a/_field-types/supported-field-types/geo-point.md +++ b/_field-types/supported-field-types/geo-point.md @@ -11,6 +11,8 @@ redirect_from: --- # Geopoint field type +**Introduced 1.0** +{: .label .label-purple } A geopoint field type contains a geographic point specified by latitude and longitude. diff --git a/_field-types/supported-field-types/geo-shape.md b/_field-types/supported-field-types/geo-shape.md index b7b06a0d04..ee98bfca03 100644 --- a/_field-types/supported-field-types/geo-shape.md +++ b/_field-types/supported-field-types/geo-shape.md @@ -11,6 +11,8 @@ redirect_from: --- # Geoshape field type +**Introduced 1.0** +{: .label .label-purple } A geoshape field type contains a geographic shape, such as a polygon or a collection of geographic points. To index a geoshape, OpenSearch tesselates the shape into a triangular mesh and stores each triangle in a BKD tree. This provides a 10-7decimal degree of precision, which represents near-perfect spatial resolution. Performance of this process is mostly impacted by the number of vertices in a polygon you are indexing. diff --git a/_field-types/supported-field-types/ip.md b/_field-types/supported-field-types/ip.md index cb2a5569c8..99b41e45cd 100644 --- a/_field-types/supported-field-types/ip.md +++ b/_field-types/supported-field-types/ip.md @@ -10,6 +10,8 @@ redirect_from: --- # IP address field type +**Introduced 1.0** +{: .label .label-purple } An ip field type contains an IP address in IPv4 or IPv6 format. diff --git a/_field-types/supported-field-types/join.md b/_field-types/supported-field-types/join.md index c83705f4c3..c707c66774 100644 --- a/_field-types/supported-field-types/join.md +++ b/_field-types/supported-field-types/join.md @@ -11,6 +11,8 @@ redirect_from: --- # Join field type +**Introduced 1.0** +{: .label .label-purple } A join field type establishes a parent/child relationship between documents in the same index. diff --git a/_field-types/supported-field-types/keyword.md b/_field-types/supported-field-types/keyword.md index eea6cc664b..ca9c8085f6 100644 --- a/_field-types/supported-field-types/keyword.md +++ b/_field-types/supported-field-types/keyword.md @@ -11,6 +11,8 @@ redirect_from: --- # Keyword field type +**Introduced 1.0** +{: .label .label-purple } A keyword field type contains a string that is not analyzed. It allows only exact, case-sensitive matches. diff --git a/_field-types/supported-field-types/knn-vector.md b/_field-types/supported-field-types/knn-vector.md index a2a7137733..80c4085485 100644 --- a/_field-types/supported-field-types/knn-vector.md +++ b/_field-types/supported-field-types/knn-vector.md @@ -8,6 +8,8 @@ has_math: true --- # k-NN vector field type +**Introduced 1.0** +{: .label .label-purple } The [k-NN plugin]({{site.url}}{{site.baseurl}}/search-plugins/knn/index/) introduces a custom data type, the `knn_vector`, that allows users to ingest their k-NN vectors into an OpenSearch index and perform different kinds of k-NN search. The `knn_vector` field is highly configurable and can serve many different k-NN workloads. In general, a `knn_vector` field can be built either by providing a method definition or specifying a model id. 
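For example, a `knn_vector` field built by providing a method definition might be mapped as in the following sketch (the index name, dimension, and method parameters shown here are illustrative assumptions, not part of the original change):

```json
PUT /vector_example
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "knn_vector",
        "dimension": 3,
        "method": {
          "name": "hnsw",
          "space_type": "l2",
          "engine": "lucene"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}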
diff --git a/_field-types/supported-field-types/match-only-text.md b/_field-types/supported-field-types/match-only-text.md index fd2c6b5850..534275bd3a 100644 --- a/_field-types/supported-field-types/match-only-text.md +++ b/_field-types/supported-field-types/match-only-text.md @@ -8,6 +8,8 @@ grand_parent: Supported field types --- # Match-only text field type +**Introduced 2.12** +{: .label .label-purple } A `match_only_text` field is a variant of a `text` field designed for full-text search when scoring and positional information of terms within a document are not critical. diff --git a/_field-types/supported-field-types/nested.md b/_field-types/supported-field-types/nested.md index 90d09177d1..f8dfca2ff8 100644 --- a/_field-types/supported-field-types/nested.md +++ b/_field-types/supported-field-types/nested.md @@ -11,6 +11,8 @@ redirect_from: --- # Nested field type +**Introduced 1.0** +{: .label .label-purple } A nested field type is a special type of [object field type]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/object/). diff --git a/_field-types/supported-field-types/object.md b/_field-types/supported-field-types/object.md index db539a9608..ee50f5af9d 100644 --- a/_field-types/supported-field-types/object.md +++ b/_field-types/supported-field-types/object.md @@ -11,6 +11,8 @@ redirect_from: --- # Object field type +**Introduced 1.0** +{: .label .label-purple } An object field type contains a JSON object (a set of name/value pairs). A value in a JSON object may be another JSON object. It is not necessary to specify `object` as the type when mapping object fields because `object` is the default type. diff --git a/_field-types/supported-field-types/percolator.md b/_field-types/supported-field-types/percolator.md index 92325b6127..2b067cf595 100644 --- a/_field-types/supported-field-types/percolator.md +++ b/_field-types/supported-field-types/percolator.md @@ -10,6 +10,8 @@ redirect_from: --- # Percolator field type +**Introduced 1.0** +{: .label .label-purple } A percolator field type specifies to treat this field as a query. Any JSON object field can be marked as a percolator field. Normally, documents are indexed and searches are run against them. When you use a percolator field, you store a search, and later the percolate query matches documents to that search. diff --git a/_field-types/supported-field-types/range.md b/_field-types/supported-field-types/range.md index 22ae1d619e..1001bae584 100644 --- a/_field-types/supported-field-types/range.md +++ b/_field-types/supported-field-types/range.md @@ -10,6 +10,8 @@ redirect_from: --- # Range field types +**Introduced 1.0** +{: .label .label-purple } The following table lists all range field types that OpenSearch supports. diff --git a/_field-types/supported-field-types/rank.md b/_field-types/supported-field-types/rank.md index a4ec0fac4c..f57c540cf5 100644 --- a/_field-types/supported-field-types/rank.md +++ b/_field-types/supported-field-types/rank.md @@ -10,6 +10,8 @@ redirect_from: --- # Rank field types +**Introduced 1.0** +{: .label .label-purple } The following table lists all rank field types that OpenSearch supports. 
diff --git a/_field-types/supported-field-types/search-as-you-type.md b/_field-types/supported-field-types/search-as-you-type.md index b9141e6b8e..55774d432a 100644 --- a/_field-types/supported-field-types/search-as-you-type.md +++ b/_field-types/supported-field-types/search-as-you-type.md @@ -11,6 +11,8 @@ redirect_from: --- # Search-as-you-type field type +**Introduced 1.0** +{: .label .label-purple } A search-as-you-type field type provides search-as-you-type functionality using both prefix and infix completion. diff --git a/_field-types/supported-field-types/text.md b/_field-types/supported-field-types/text.md index 16350c0cb3..b06bec2187 100644 --- a/_field-types/supported-field-types/text.md +++ b/_field-types/supported-field-types/text.md @@ -11,6 +11,8 @@ redirect_from: --- # Text field type +**Introduced 1.0** +{: .label .label-purple } A `text` field type contains a string that is analyzed. It is used for full-text search because it allows partial matches. Searches for multiple terms can match some but not all of them. Depending on the analyzer, results can be case insensitive, stemmed, have stopwords removed, have synonyms applied, and so on. diff --git a/_field-types/supported-field-types/token-count.md b/_field-types/supported-field-types/token-count.md index 6c3445e6a7..11eeff7854 100644 --- a/_field-types/supported-field-types/token-count.md +++ b/_field-types/supported-field-types/token-count.md @@ -11,6 +11,8 @@ redirect_from: --- # Token count field type +**Introduced 1.0** +{: .label .label-purple } A token count field type stores the number of analyzed tokens in a string. diff --git a/_field-types/supported-field-types/unsigned-long.md b/_field-types/supported-field-types/unsigned-long.md index dde8d25dee..4c38cb3090 100644 --- a/_field-types/supported-field-types/unsigned-long.md +++ b/_field-types/supported-field-types/unsigned-long.md @@ -8,6 +8,8 @@ has_children: false --- # Unsigned long field type +**Introduced 2.8** +{: .label .label-purple } The `unsigned_long` field type is a numeric field type that represents an unsigned 64-bit integer with a minimum value of 0 and a maximum value of 264 − 1. In the following example, `counter` is mapped as an `unsigned_long` field: diff --git a/_field-types/supported-field-types/wildcard.md b/_field-types/supported-field-types/wildcard.md index c438f35c62..0f8c176135 100644 --- a/_field-types/supported-field-types/wildcard.md +++ b/_field-types/supported-field-types/wildcard.md @@ -8,6 +8,8 @@ grand_parent: Supported field types --- # Wildcard field type +**Introduced 2.15** +{: .label .label-purple } A `wildcard` field is a variant of a `keyword` field designed for arbitrary substring and regular expression matching. diff --git a/_field-types/supported-field-types/xy-point.md b/_field-types/supported-field-types/xy-point.md index 57b6f64758..0d066b9f09 100644 --- a/_field-types/supported-field-types/xy-point.md +++ b/_field-types/supported-field-types/xy-point.md @@ -11,6 +11,8 @@ redirect_from: --- # xy point field type +**Introduced 2.4** +{: .label .label-purple } An xy point field type contains a point in a two-dimensional Cartesian coordinate system, specified by x and y coordinates. It is based on the Lucene [XYPoint](https://lucene.apache.org/core/9_3_0/core/org/apache/lucene/geo/XYPoint.html) field type. The xy point field type is similar to the [geopoint]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/geo-point/) field type, but does not have the range limitations of geopoint. 
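A minimal, hypothetical mapping for an xy point field might look like the following sketch (the index and field names are illustrative):

```json
PUT /drawings_example
{
  "mappings": {
    "properties": {
      "point": {
        "type": "xy_point"
      }
    }
  }
}
```
{% include copy-curl.html %}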
The coordinates of an xy point are single-precision floating-point values. For information about the range and precision of floating-point values, see [Numeric field types]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/numeric/). diff --git a/_field-types/supported-field-types/xy-shape.md b/_field-types/supported-field-types/xy-shape.md index f1c7191240..9dcbafceae 100644 --- a/_field-types/supported-field-types/xy-shape.md +++ b/_field-types/supported-field-types/xy-shape.md @@ -11,6 +11,8 @@ redirect_from: --- # xy shape field type +**Introduced 2.4** +{: .label .label-purple } An xy shape field type contains a shape, such as a polygon or a collection of xy points. It is based on the Lucene [XYShape](https://lucene.apache.org/core/9_3_0/core/org/apache/lucene/document/XYShape.html) field type. To index an xy shape, OpenSearch tessellates the shape into a triangular mesh and stores each triangle in a BKD tree (a set of balanced k-dimensional trees). This provides a 10-7decimal degree of precision, which represents near-perfect spatial resolution. From fd2e9fe32efee09a23665390f6089f9a4baf46b3 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Fri, 13 Sep 2024 08:57:51 -0400 Subject: [PATCH 046/111] Document new experimental ingestion streaming APIs (#8123) * Document new experimental ingestion streaming APIs Signed-off-by: Andriy Redko * Doc review Signed-off-by: Fanit Kolchina * Small rewording Signed-off-by: Fanit Kolchina * Address review comments Signed-off-by: Andriy Redko * Address review comments Signed-off-by: Andriy Redko * Address review comments Signed-off-by: Andriy Redko * Address review comments Signed-off-by: Andriy Redko * Address review comments Signed-off-by: Andriy Redko * Address review comments Signed-off-by: Andriy Redko --------- Signed-off-by: Andriy Redko Signed-off-by: Fanit Kolchina Co-authored-by: Fanit Kolchina --- .../document-apis/bulk-streaming.md | 81 +++++++++++++++++++ _api-reference/document-apis/bulk.md | 12 +-- 2 files changed, 87 insertions(+), 6 deletions(-) create mode 100644 _api-reference/document-apis/bulk-streaming.md diff --git a/_api-reference/document-apis/bulk-streaming.md b/_api-reference/document-apis/bulk-streaming.md new file mode 100644 index 0000000000..7d05e93c8a --- /dev/null +++ b/_api-reference/document-apis/bulk-streaming.md @@ -0,0 +1,81 @@ +--- +layout: default +title: Streaming bulk +parent: Document APIs +nav_order: 25 +redirect_from: + - /opensearch/rest-api/document-apis/bulk/streaming/ +--- + +# Streaming bulk +**Introduced 2.17.0** +{: .label .label-purple } + +This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated [GitHub issue](https://github.com/opensearch-project/OpenSearch/issues/9065). +{: .warning} + +The streaming bulk operation lets you add, update, or delete multiple documents by streaming the request and getting the results as a streaming response. In comparison to the traditional [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/), streaming ingestion eliminates the need to estimate the batch size (which is affected by the cluster operational state at any given time) and naturally applies backpressure between many clients and the cluster. The streaming works over HTTP/2 or HTTP/1.1 (using chunked transfer encoding), depending on the capabilities of the clients and the cluster. + +The default HTTP transport method does not support streaming. 
You must install the [`transport-reactor-netty4`]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/network-settings/#selecting-the-transport) HTTP transport plugin and use it as the default HTTP transport layer. Both the `transport-reactor-netty4` plugin and the Streaming Bulk API are experimental. +{: .note} + +## Path and HTTP methods + +```json +POST _bulk/stream +POST /_bulk/stream +``` + +If you specify the index in the path, then you don't need to include it in the [request body chunks]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/#request-body). + +OpenSearch also accepts PUT requests to the `_bulk/stream` path, but we highly recommend using POST. The accepted usage of PUT---adding or replacing a single resource on a given path---doesn't make sense for streaming bulk requests. +{: .note } + + +## Query parameters + +The following table lists the available query parameters. All query parameters are optional. + +Parameter | Data type | Description +:--- | :--- | :--- +`pipeline` | String | The pipeline ID for preprocessing documents. +`refresh` | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` causes the changes show up in search results immediately but degrades cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance isn't degraded. +`require_alias` | Boolean | Set to `true` to require that all actions target an index alias rather than an index. Default is `false`. +`routing` | String | Routes the request to the specified shard. +`timeout` | Time | How long to wait for the request to return. Default is `1m`. +`type` | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using the `_doc` type for all indexes. +`wait_for_active_shards` | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is `1` (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. For example, if you specify a value of 3, the index must have 2 replicas distributed across 2 additional nodes in order for the request to succeed. +`batch_interval` | Time | Specifies for how long bulk operations should be accumulated into a batch before sending the batch to data nodes. +`batch_size` | Time | Specifies how many bulk operations should be accumulated into a batch before sending the batch to data nodes. Default is `1`. +{% comment %}_source | List | asdf +`_source_excludes` | List | asdf +`_source_includes` | List | asdf{% endcomment %} + +## Request body + +The Streaming Bulk API request body is fully compatible with the [Bulk API request body]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/#request-body), where each bulk operation (create/index/update/delete) is sent as a separate chunk. 
+ +## Example request + +```json +curl -X POST "http://localhost:9200/_bulk/stream" -H "Transfer-Encoding: chunked" -H "Content-Type: application/json" -d' +{ "delete": { "_index": "movies", "_id": "tt2229499" } } +{ "index": { "_index": "movies", "_id": "tt1979320" } } +{ "title": "Rush", "year": 2013 } +{ "create": { "_index": "movies", "_id": "tt1392214" } } +{ "title": "Prisoners", "year": 2013 } +{ "update": { "_index": "movies", "_id": "tt0816711" } } +{ "doc" : { "title": "World War Z" } } +' +``` +{% include copy.html %} + +## Example response + +Depending on the batch settings, each streamed response chunk may report the results of one or many (batch) bulk operations. For example, for the preceding request with no batching (default), the streaming response may appear as follows: + +```json +{"took": 11, "errors": false, "items": [ { "index": {"_index": "movies", "_id": "tt1979320", "_version": 1, "result": "created", "_shards": { "total": 2 "successful": 1, "failed": 0 }, "_seq_no": 1, "_primary_term": 1, "status": 201 } } ] } +{"took": 2, "errors": true, "items": [ { "create": { "_index": "movies", "_id": "tt1392214", "status": 409, "error": { "type": "version_conflict_engine_exception", "reason": "[tt1392214]: version conflict, document already exists (current version [1])", "index": "movies", "shard": "0", "index_uuid": "yhizhusbSWmP0G7OJnmcLg" } } } ] } +{"took": 4, "errors": true, "items": [ { "update": { "_index": "movies", "_id": "tt0816711", "status": 404, "error": { "type": "document_missing_exception", "reason": "[_doc][tt0816711]: document missing", "index": "movies", "shard": "0", "index_uuid": "yhizhusbSWmP0G7OJnmcLg" } } } ] } +``` diff --git a/_api-reference/document-apis/bulk.md b/_api-reference/document-apis/bulk.md index 0475aa573d..4add60ee37 100644 --- a/_api-reference/document-apis/bulk.md +++ b/_api-reference/document-apis/bulk.md @@ -53,16 +53,16 @@ All bulk URL parameters are optional. Parameter | Type | Description :--- | :--- | :--- pipeline | String | The pipeline ID for preprocessing documents. -refresh | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` makes the changes show up in search results immediately, but hurts cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance doesn't suffer. +refresh | Enum | Whether to refresh the affected shards after performing the indexing operations. Default is `false`. `true` causes the changes show up in search results immediately but degrades cluster performance. `wait_for` waits for a refresh. Requests take longer to return, but cluster performance isn't degraded. require_alias | Boolean | Set to `true` to require that all actions target an index alias rather than an index. Default is `false`. routing | String | Routes the request to the specified shard. -timeout | Time | How long to wait for the request to return. Default `1m`. -type | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using a type of `_doc` for all indexes. -wait_for_active_shards | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is 1 (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. 
For example, if you specify a value of 3, the index must have two replicas distributed across two additional nodes for the request to succeed. +timeout | Time | How long to wait for the request to return. Default is `1m`. +type | String | (Deprecated) The default document type for documents that don't specify a type. Default is `_doc`. We highly recommend ignoring this parameter and using the `_doc` type for all indexes. +wait_for_active_shards | String | Specifies the number of active shards that must be available before OpenSearch processes the bulk request. Default is `1` (only the primary shard). Set to `all` or a positive integer. Values greater than 1 require replicas. For example, if you specify a value of 3, the index must have 2 replicas distributed across 2 additional nodes in order for the request to succeed. batch_size | Integer | **(Deprecated)** Specifies the number of documents to be batched and sent to an ingest pipeline to be processed together. Default is `2147483647` (documents are ingested by an ingest pipeline all at once). If the bulk request doesn't explicitly specify an ingest pipeline or the index doesn't have a default ingest pipeline, then this parameter is ignored. Only documents with `create`, `index`, or `update` actions can be grouped into batches. {% comment %}_source | List | asdf -_source_excludes | list | asdf -_source_includes | list | asdf{% endcomment %} +_source_excludes | List | asdf +_source_includes | List | asdf{% endcomment %} ## Request body From 4c1e7825e8c31378aef8dcefb9f33869db591f3e Mon Sep 17 00:00:00 2001 From: bowenlan-amzn Date: Fri, 13 Sep 2024 05:58:20 -0700 Subject: [PATCH 047/111] Terms query can accept encoded terms input as bitmap (#8133) * draft Signed-off-by: bowenlan-amzn * Doc review Signed-off-by: Fanit Kolchina * Update _query-dsl/term/terms.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: bowenlan-amzn Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _query-dsl/term/terms.md | 134 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 134 insertions(+) diff --git a/_query-dsl/term/terms.md b/_query-dsl/term/terms.md index 42c74c0436..7dac6a9619 100644 --- a/_query-dsl/term/terms.md +++ b/_query-dsl/term/terms.md @@ -39,6 +39,7 @@ Parameter | Data type | Description :--- | :--- | :--- `` | String | The field in which to search. A document is returned in the results only if its field value exactly matches at least one term, with the correct spacing and capitalization. `boost` | Floating-point | A floating-point value that specifies the weight of this field toward the relevance score. Values above 1.0 increase the field’s relevance. Values between 0.0 and 1.0 decrease the field’s relevance. Default is 1.0. +`value_type` | String | Specifies the types of values used for filtering. Valid values are `default` and `bitmap`. If omitted, the value defaults to `default`. ## Terms lookup @@ -250,3 +251,136 @@ Parameter | Data type | Description `path` | String | The name of the field from which to fetch field values. Specify nested fields using dot path notation. Required. 
`routing` | String | Custom routing value of the document from which to fetch field values. Optional. Required if a custom routing value was provided when the document was indexed. `boost` | Floating-point | A floating-point value that specifies the weight of this field toward the relevance score. Values above 1.0 increase the field’s relevance. Values between 0.0 and 1.0 decrease the field’s relevance. Default is 1.0. + +## Bitmap filtering +**Introduced 2.17** +{: .label .label-purple } + +The `terms` query can filter for multiple terms simultaneously. However, when the number of terms in the input filter increases to a large value (around 10,000), the resulting network and memory overhead can become significant, making the query inefficient. In such cases, consider encoding your large terms filter using a [roaring bitmap](https://github.com/RoaringBitmap/RoaringBitmap) for more efficient filtering. + +The following example assumes that you have two indexes: a `products` index, which contains all the products sold by a company, and a `customers` index, which stores filters representing customers who own specific products. + +First, create a `products` index and map `product_id` as a `keyword`: + +```json +PUT /products +{ + "mappings": { + "properties": { + "product_id": { "type": "keyword" } + } + } +} +``` +{% include copy-curl.html %} + +Next, index three documents that correspond to products: + +```json +PUT students/_doc/1 +{ + "name": "Product 1", + "product_id" : "111" +} +``` +{% include copy-curl.html %} + +```json +PUT students/_doc/2 +{ + "name": "Product 2", + "product_id" : "222" +} +``` +{% include copy-curl.html %} + +```json +PUT students/_doc/3 +{ + "name": "Product 3", + "product_id" : "333" +} +``` +{% include copy-curl.html %} + +To store customer bitmap filters, you'll create a `customer_filter` [binary field](https://opensearch.org/docs/latest/field-types/supported-field-types/binary/) in the `customers` index. Specify `store` as `true` to store the field: + +```json +PUT /customers +{ + "mappings": { + "properties": { + "customer_filter": { + "type": "binary", + "store": true + } + } + } +} +``` +{% include copy-curl.html %} + +For each customer, you need to generate a bitmap that represents the product IDs of the products the customer owns. This bitmap effectively encodes the filter criteria for that customer. In this example, you'll create a `terms` filter for a customer whose ID is `customer123` and who owns products `111`, `222`, and `333`. + +To encode a `terms` filter for the customer, first create a roaring bitmap for the filter. This example creates a bitmap using the [PyRoaringBitMap] library, so first run `pip install pyroaring` to install the library. Then serialize the bitmap and encode it using a [Base64](https://en.wikipedia.org/wiki/Base64) encoding scheme: + +```py +from pyroaring import BitMap +import base64 + +# Create a bitmap, serialize it into a byte string, and encode into Base64 +bm = BitMap([111, 222, 333]) # product ids owned by a customer +encoded = base64.b64encode(BitMap.serialize(bm)) + +# Convert the Base64-encoded bytes to a string for storage or transmission +encoded_bm_str = encoded.decode('utf-8') + +# Print the encoded bitmap +print(f"Encoded Bitmap: {encoded_bm_str}") +``` +{% include copy.html %} + +Next, index the customer filter into the `customers` index. The document ID for the filter is the same as the ID for the corresponding customer (in this example, `customer123`). 
The `customer_filter` field contains the bitmap you generated for this customer: + +```json +POST customers/_doc/customer123 +{ + "customer_filter": "OjAAAAEAAAAAAAIAEAAAAG8A3gBNAQ==" +} +``` +{% include copy-curl.html %} + +Now you can run a `terms` query on the `products` index to look up a specific customer in the `customers` index. Because you're looking up a stored field instead of `_source`, set `store` to `true`. In the `value_type` field, specify the data type of the `terms` input as `bitmap`: + +```json +POST /products/_search +{ + "query": { + "terms": { + "product_id": { + "index": "customers", + "id": "customer123", + "path": "customer_filter", + "store": true + }, + "value_type": "bitmap" + } + } +} +``` +{% include copy-curl.html %} + +You can also directly pass the bitmap to the `terms` query. In this example, the `product_id` field contains the customer filter bitmap for the customer whose ID is `customer123`: + +```json +POST /products/_search +{ + "query": { + "terms": { + "product_id": "OjAAAAEAAAAAAAIAEAAAAG8A3gBNAQ==", + "value_type": "bitmap" + } + } +} +``` +{% include copy-curl.html %} \ No newline at end of file From 27c02f95de42598c518bb446fce043869daab51a Mon Sep 17 00:00:00 2001 From: Marc Handalian Date: Fri, 13 Sep 2024 06:04:12 -0700 Subject: [PATCH 048/111] Derived field updates for 2.17 (#8244) * update aggregation support in limitations and add sample req/response. Signed-off-by: Marc Handalian * remove concurrent search limitation Signed-off-by: Marc Handalian * minor updates Signed-off-by: Marc Handalian * fix style job error Signed-off-by: Marc Handalian * Doc review Signed-off-by: Fanit Kolchina * Clarify the list of unsupported geo aggregations Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Marc Handalian Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _field-types/supported-field-types/derived.md | 78 ++++++++++++++++++- 1 file changed, 76 insertions(+), 2 deletions(-) diff --git a/_field-types/supported-field-types/derived.md b/_field-types/supported-field-types/derived.md index d989c3e4a4..cb358f47f9 100644 --- a/_field-types/supported-field-types/derived.md +++ b/_field-types/supported-field-types/derived.md @@ -28,11 +28,11 @@ Despite the potential performance impact of query-time computations, the flexibi Currently, derived fields have the following limitations: -- **Aggregation, scoring, and sorting**: Not yet supported. +- **Scoring and sorting**: Not yet supported. +- **Aggregations**: Starting with OpenSearch 2.17, derived fields support most aggregation types. The following aggregations are not supported: geographic (geodistance, geohash grid, geohex grid, geotile grid, geobounds, geocentroid), significant terms, significant text, and scripted metric. - **Dashboard support**: These fields are not displayed in the list of available fields in OpenSearch Dashboards. However, you can still use them for filtering if you know the derived field name. - **Chained derived fields**: One derived field cannot be used to define another derived field. - **Join field type**: Derived fields are not supported for the [join field type]({{site.url}}{{site.baseurl}}/opensearch/supported-field-types/join/). 
-- **Concurrent segment search**: Derived fields are not supported for [concurrent segment search]({{site.url}}{{site.baseurl}}/search-plugins/concurrent-segment-search/). We are planning to address these limitations in future versions. @@ -541,6 +541,80 @@ The response specifies highlighting in the `url` field: ``` +## Aggregations + +Starting with OpenSearch 2.17, derived fields support most aggregation types. + +Geographic, significant terms, significant text, and scripted metric aggregations are not supported. +{: .note} + +For example, the following request creates a simple `terms` aggregation on the `method` derived field: + +```json +POST /logs/_search +{ + "size": 0, + "aggs": { + "methods": { + "terms": { + "field": "method" + } + } + } +} +``` +{% include copy-curl.html %} + +The response contains the following buckets: + +
+ + Response + + {: .text-delta} + +```json +{ + "took" : 12, + "timed_out" : false, + "_shards" : { + "total" : 1, + "successful" : 1, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : { + "value" : 5, + "relation" : "eq" + }, + "max_score" : null, + "hits" : [ ] + }, + "aggregations" : { + "methods" : { + "doc_count_error_upper_bound" : 0, + "sum_other_doc_count" : 0, + "buckets" : [ + { + "key" : "GET", + "doc_count" : 2 + }, + { + "key" : "POST", + "doc_count" : 2 + }, + { + "key" : "DELETE", + "doc_count" : 1 + } + ] + } + } +} +``` +
+ ## Performance Derived fields are not indexed but are computed dynamically by retrieving values from the `_source` field or doc values. Thus, they run more slowly. To improve performance, try the following: From 8ba63fe00b3a3c1212c05cdc7157d2a7c48e5787 Mon Sep 17 00:00:00 2001 From: Ling Hengqian Date: Fri, 13 Sep 2024 21:14:39 +0800 Subject: [PATCH 049/111] Add the `pipeline` subdocument to the `Sources` document (#8211) Signed-off-by: linghengqian Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- .../configuration/sources/documentdb.md | 2 +- .../configuration/sources/dynamo-db.md | 2 +- .../pipelines/configuration/sources/http.md | 2 +- .../pipelines/configuration/sources/kafka.md | 2 +- .../configuration/sources/opensearch.md | 2 +- .../configuration/sources/otel-logs-source.md | 2 +- .../sources/otel-metrics-source.md | 2 +- .../sources/otel-trace-source.md | 2 +- .../configuration/sources/pipeline.md | 31 +++++++++++++++++++ .../pipelines/configuration/sources/s3.md | 2 +- .../configuration/sources/sources.md | 2 +- 11 files changed, 41 insertions(+), 10 deletions(-) create mode 100644 _data-prepper/pipelines/configuration/sources/pipeline.md diff --git a/_data-prepper/pipelines/configuration/sources/documentdb.md b/_data-prepper/pipelines/configuration/sources/documentdb.md index c453b60a39..d3dd31edcb 100644 --- a/_data-prepper/pipelines/configuration/sources/documentdb.md +++ b/_data-prepper/pipelines/configuration/sources/documentdb.md @@ -3,7 +3,7 @@ layout: default title: documentdb parent: Sources grand_parent: Pipelines -nav_order: 2 +nav_order: 10 --- # documentdb diff --git a/_data-prepper/pipelines/configuration/sources/dynamo-db.md b/_data-prepper/pipelines/configuration/sources/dynamo-db.md index e465f45044..c5a7c8d188 100644 --- a/_data-prepper/pipelines/configuration/sources/dynamo-db.md +++ b/_data-prepper/pipelines/configuration/sources/dynamo-db.md @@ -3,7 +3,7 @@ layout: default title: dynamodb parent: Sources grand_parent: Pipelines -nav_order: 3 +nav_order: 20 --- # dynamodb diff --git a/_data-prepper/pipelines/configuration/sources/http.md b/_data-prepper/pipelines/configuration/sources/http.md index 06933edc1c..2171d1ea02 100644 --- a/_data-prepper/pipelines/configuration/sources/http.md +++ b/_data-prepper/pipelines/configuration/sources/http.md @@ -3,7 +3,7 @@ layout: default title: http parent: Sources grand_parent: Pipelines -nav_order: 5 +nav_order: 30 redirect_from: - /data-prepper/pipelines/configuration/sources/http-source/ --- diff --git a/_data-prepper/pipelines/configuration/sources/kafka.md b/_data-prepper/pipelines/configuration/sources/kafka.md index e8452a93c3..ecd7c7eaa0 100644 --- a/_data-prepper/pipelines/configuration/sources/kafka.md +++ b/_data-prepper/pipelines/configuration/sources/kafka.md @@ -3,7 +3,7 @@ layout: default title: kafka parent: Sources grand_parent: Pipelines -nav_order: 6 +nav_order: 40 --- # kafka diff --git a/_data-prepper/pipelines/configuration/sources/opensearch.md b/_data-prepper/pipelines/configuration/sources/opensearch.md index a7ba965729..1ee2237575 100644 --- a/_data-prepper/pipelines/configuration/sources/opensearch.md +++ b/_data-prepper/pipelines/configuration/sources/opensearch.md @@ -3,7 +3,7 @@ layout: default title: opensearch parent: Sources grand_parent: Pipelines -nav_order: 30 +nav_order: 50 --- # opensearch diff --git a/_data-prepper/pipelines/configuration/sources/otel-logs-source.md b/_data-prepper/pipelines/configuration/sources/otel-logs-source.md index 
068369efaf..38095d7d7f 100644 --- a/_data-prepper/pipelines/configuration/sources/otel-logs-source.md +++ b/_data-prepper/pipelines/configuration/sources/otel-logs-source.md @@ -3,7 +3,7 @@ layout: default title: otel_logs_source parent: Sources grand_parent: Pipelines -nav_order: 25 +nav_order: 60 --- # otel_logs_source diff --git a/_data-prepper/pipelines/configuration/sources/otel-metrics-source.md b/_data-prepper/pipelines/configuration/sources/otel-metrics-source.md index bea74a96d3..0e8d377828 100644 --- a/_data-prepper/pipelines/configuration/sources/otel-metrics-source.md +++ b/_data-prepper/pipelines/configuration/sources/otel-metrics-source.md @@ -3,7 +3,7 @@ layout: default title: otel_metrics_source parent: Sources grand_parent: Pipelines -nav_order: 10 +nav_order: 70 --- # otel_metrics_source diff --git a/_data-prepper/pipelines/configuration/sources/otel-trace-source.md b/_data-prepper/pipelines/configuration/sources/otel-trace-source.md index 1be7864c33..de45a5de63 100644 --- a/_data-prepper/pipelines/configuration/sources/otel-trace-source.md +++ b/_data-prepper/pipelines/configuration/sources/otel-trace-source.md @@ -3,7 +3,7 @@ layout: default title: otel_trace_source parent: Sources grand_parent: Pipelines -nav_order: 15 +nav_order: 80 redirect_from: - /data-prepper/pipelines/configuration/sources/otel-trace/ --- diff --git a/_data-prepper/pipelines/configuration/sources/pipeline.md b/_data-prepper/pipelines/configuration/sources/pipeline.md new file mode 100644 index 0000000000..6ba025bd18 --- /dev/null +++ b/_data-prepper/pipelines/configuration/sources/pipeline.md @@ -0,0 +1,31 @@ +--- +layout: default +title: pipeline +parent: Sources +grand_parent: Pipelines +nav_order: 90 +--- + +# pipeline + +Use the `pipeline` sink to read from another pipeline. + +## Configuration options + +The `pipeline` sink supports the following configuration options. + +| Option | Required | Type | Description | +|:-------|:---------|:-------|:---------------------------------------| +| `name` | Yes | String | The name of the pipeline to read from. 
| + +## Usage + +The following example configures a `pipeline` sink named `sample-pipeline` that reads from a pipeline named `movies`: + +```yaml +sample-pipeline: + source: + - pipeline: + name: "movies" +``` +{% include copy.html %} diff --git a/_data-prepper/pipelines/configuration/sources/s3.md b/_data-prepper/pipelines/configuration/sources/s3.md index 5a7d9986e5..db92718a36 100644 --- a/_data-prepper/pipelines/configuration/sources/s3.md +++ b/_data-prepper/pipelines/configuration/sources/s3.md @@ -3,7 +3,7 @@ layout: default title: s3 source parent: Sources grand_parent: Pipelines -nav_order: 20 +nav_order: 100 --- # s3 source diff --git a/_data-prepper/pipelines/configuration/sources/sources.md b/_data-prepper/pipelines/configuration/sources/sources.md index 811b161e16..682f215517 100644 --- a/_data-prepper/pipelines/configuration/sources/sources.md +++ b/_data-prepper/pipelines/configuration/sources/sources.md @@ -3,7 +3,7 @@ layout: default title: Sources parent: Pipelines has_children: true -nav_order: 20 +nav_order: 110 --- # Sources From 41b1b069f1ac51540743e60f48bc080c91017ce7 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Fri, 13 Sep 2024 14:24:26 +0100 Subject: [PATCH 050/111] Add Cjk width token filter (#7917) * adding token filter page for cjk width #7875 Signed-off-by: AntonEliatra * adding details to the page Signed-off-by: AntonEliatra * adding details to the page Signed-off-by: AntonEliatra * Updating details as per comments Signed-off-by: AntonEliatra * Update cjk-width.md Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * Update cjk-width.md Signed-off-by: AntonEliatra * Update cjk-width.md Signed-off-by: AntonEliatra * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: AntonEliatra --------- Signed-off-by: AntonEliatra Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _analyzers/token-filters/cjk-width.md | 96 +++++++++++++++++++++++++++ _analyzers/token-filters/index.md | 2 +- 2 files changed, 97 insertions(+), 1 deletion(-) create mode 100644 _analyzers/token-filters/cjk-width.md diff --git a/_analyzers/token-filters/cjk-width.md b/_analyzers/token-filters/cjk-width.md new file mode 100644 index 0000000000..4960729cd1 --- /dev/null +++ b/_analyzers/token-filters/cjk-width.md @@ -0,0 +1,96 @@ +--- +layout: default +title: CJK width +parent: Token filters +nav_order: 40 +--- + +# CJK width token filter + +The `cjk_width` token filter normalizes Chinese, Japanese, and Korean (CJK) tokens by converting full-width ASCII characters to their standard (half-width) ASCII equivalents and half-width katakana characters to their full-width equivalents. + +### Converting full-width ASCII characters + +In CJK texts, ASCII characters (such as letters and numbers) can appear in full-width form, occupying the space of two half-width characters. Full-width ASCII characters are typically used in East Asian typography for alignment with the width of CJK characters. However, for the purposes of indexing and searching, these full-width characters need to be normalized to their standard (half-width) ASCII equivalents. 
+ +The following example illustrates ASCII character normalization: + +``` + Full-Width: ABCDE 12345 + Normalized (half-width): ABCDE 12345 +``` + +### Converting half-width katakana characters + +The `cjk_width` token filter converts half-width katakana characters to their full-width counterparts, which are the standard form used in Japanese text. This normalization, illustrated in the following example, is important for consistency in text processing and searching: + + +``` + Half-Width katakana: カタカナ + Normalized (full-width) katakana: カタカナ +``` + +## Example + +The following example request creates a new index named `cjk_width_example_index` and defines an analyzer with the `cjk_width` filter: + +```json +PUT /cjk_width_example_index +{ + "settings": { + "analysis": { + "analyzer": { + "cjk_width_analyzer": { + "type": "custom", + "tokenizer": "standard", + "filter": ["cjk_width"] + } + } + } + } +} +``` +{% include copy-curl.html %} + +## Generated tokens + +Use the following request to examine the tokens generated using the analyzer: + +```json +POST /cjk_width_example_index/_analyze +{ + "analyzer": "cjk_width_analyzer", + "text": "Tokyo 2024 カタカナ" +} +``` +{% include copy-curl.html %} + +The response contains the generated tokens: + +```json +{ + "tokens": [ + { + "token": "Tokyo", + "start_offset": 0, + "end_offset": 5, + "type": "", + "position": 0 + }, + { + "token": "2024", + "start_offset": 6, + "end_offset": 10, + "type": "", + "position": 1 + }, + { + "token": "カタカナ", + "start_offset": 11, + "end_offset": 15, + "type": "", + "position": 2 + } + ] +} +``` diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index a9b621d5ab..86925123b8 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -16,7 +16,7 @@ Token filter | Underlying Lucene token filter| Description [`apostrophe`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/apostrophe/) | [ApostropheFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/ApostropheFilter.html) | In each token containing an apostrophe, the `apostrophe` token filter removes the apostrophe itself and all characters following it. [`asciifolding`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/asciifolding/) | [ASCIIFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html) | Converts alphabetic, numeric, and symbolic characters. `cjk_bigram` | [CJKBigramFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html) | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens. -`cjk_width` | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules:
- Folds full-width ASCII character variants into the equivalent basic Latin characters.
- Folds half-width Katakana character variants into the equivalent Kana characters. +[`cjk_width`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/cjk-width/) | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules:
- Folds full-width ASCII character variants into their equivalent basic Latin characters.
- Folds half-width katakana character variants into their equivalent kana characters. `classic` | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms. `common_grams` | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams. `conditional` | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script. From 9f5fe326458af0f062eba6255a5e0118c10eb723 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Fri, 13 Sep 2024 14:28:12 +0100 Subject: [PATCH 051/111] Add Cjk bigram token filter page (#7916) * adding cjk bigram token filter page #7874 Signed-off-by: AntonEliatra * updating as per PR comments Signed-off-by: AntonEliatra * updating the heading Signed-off-by: AntonEliatra * Updating details as per comments Signed-off-by: AntonEliatra * Update cjk-bigram.md Signed-off-by: AntonEliatra * updating the configs Signed-off-by: AntonEliatra * updating examples Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: AntonEliatra * updating as per comments Signed-off-by: Anton Rubin * Update cjk-bigram.md Signed-off-by: AntonEliatra * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: AntonEliatra --------- Signed-off-by: AntonEliatra Signed-off-by: Anton Rubin Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _analyzers/token-filters/cjk-bigram.md | 160 +++++++++++++++++++++++++ 1 file changed, 160 insertions(+) create mode 100644 _analyzers/token-filters/cjk-bigram.md diff --git a/_analyzers/token-filters/cjk-bigram.md b/_analyzers/token-filters/cjk-bigram.md new file mode 100644 index 0000000000..ab21549c47 --- /dev/null +++ b/_analyzers/token-filters/cjk-bigram.md @@ -0,0 +1,160 @@ +--- +layout: default +title: CJK bigram +parent: Token filters +nav_order: 30 +--- + +# CJK bigram token filter + +The `cjk_bigram` token filter is designed specifically for processing East Asian languages, such as Chinese, Japanese, and Korean (CJK), which typically don't use spaces to separate words. A bigram is a sequence of two adjacent elements in a string of tokens, which can be characters or words. For CJK languages, bigrams help approximate word boundaries and capture significant character pairs that can convey meaning. + + +## Parameters + +The `cjk_bigram` token filter can be configured with two parameters: `ignore_scripts`and `output_unigrams`. + +### `ignore_scripts` + +The `cjk-bigram` token filter ignores all non-CJK scripts (writing systems like Latin or Cyrillic) and tokenizes only CJK text into bigrams. Use this option to specify CJK scripts to be ignored. This option takes the following valid values: + +- `han`: The `han` script processes Han characters. [Han characters](https://simple.wikipedia.org/wiki/Chinese_characters) are logograms used in the written languages of China, Japan, and Korea. 
The filter can help with text processing tasks like tokenizing, normalizing, or stemming text written in Chinese, Japanese kanji, or Korean Hanja.
+
+- `hangul`: The `hangul` script processes Hangul characters, which are unique to the Korean language and do not exist in other East Asian scripts.
+
+- `hiragana`: The `hiragana` script processes hiragana, one of the two syllabaries used in the Japanese writing system.
+ Hiragana is typically used for native Japanese words, grammatical elements, and certain forms of punctuation.
+
+- `katakana`: The `katakana` script processes katakana, the other Japanese syllabary.
+ Katakana is mainly used for foreign loanwords, onomatopoeia, scientific names, and certain Japanese words.
+
+
+### `output_unigrams`
+
+This option, when set to `true`, outputs both unigrams (single characters) and bigrams. Default is `false`.
+
+## Example
+
+The following example request creates a new index named `cjk_bigram_example` and defines an analyzer with a `cjk_bigram` filter and the `ignored_scripts` parameter set to `katakana`:
+
+```json
PUT /cjk_bigram_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_bigrams_no_katakana": {
          "tokenizer": "standard",
          "filter": [ "cjk_bigrams_no_katakana_filter" ]
        }
      },
      "filter": {
        "cjk_bigrams_no_katakana_filter": {
          "type": "cjk_bigram",
          "ignored_scripts": [
            "katakana"
          ],
          "output_unigrams": true
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_bigram_example/_analyze
{
  "analyzer": "cjk_bigrams_no_katakana",
  "text": "東京タワーに行く"
}
```
{% include copy-curl.html %}

Sample text: "東京タワーに行く"

 東京 (Kanji for "Tokyo")
 タワー (Katakana for "Tower")
 に行く (Hiragana and Kanji for "go to")

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "東",
      "start_offset": 0,
      "end_offset": 1,
      "type": "",
      "position": 0
    },
    {
      "token": "東京",
      "start_offset": 0,
      "end_offset": 2,
      "type": "",
      "position": 0,
      "positionLength": 2
    },
    {
      "token": "京",
      "start_offset": 1,
      "end_offset": 2,
      "type": "",
      "position": 1
    },
    {
      "token": "タワー",
      "start_offset": 2,
      "end_offset": 5,
      "type": "",
      "position": 2
    },
    {
      "token": "に",
      "start_offset": 5,
      "end_offset": 6,
      "type": "",
      "position": 3
    },
    {
      "token": "に行",
      "start_offset": 5,
      "end_offset": 7,
      "type": "",
      "position": 3,
      "positionLength": 2
    },
    {
      "token": "行",
      "start_offset": 6,
      "end_offset": 7,
      "type": "",
      "position": 4
    },
    {
      "token": "行く",
      "start_offset": 6,
      "end_offset": 8,
      "type": "",
      "position": 4,
      "positionLength": 2
    },
    {
      "token": "く",
      "start_offset": 7,
      "end_offset": 8,
      "type": "",
      "position": 5
    }
  ]
}
```

From 6a6bd3b86cfb0297a1740f08605f904c4e48e8ac Mon Sep 17 00:00:00 2001
From: yuye-aws
Date: Fri, 13 Sep 2024 22:31:41 +0800
Subject: [PATCH 052/111] Clarify doc on text_chunking processor: nested field in input and output (#8229)

* update: clarify field Signed-off-by: yuye-aws * Update _ingest-pipelines/processors/text-chunking.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: yuye-aws * Update _ingest-pipelines/processors/text-chunking.md Co-authored-by: Nathan Bower Signed-off-by: yuye-aws

---------

Signed-off-by: yuye-aws Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower
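The change below adds a note about chunking nested fields. For illustration only (the `foo`/`bar` field names are hypothetical and simply mirror the note's own example), a `text_chunking` processor configured for a nested field might be sketched as follows:

```json
{
  "text_chunking": {
    "algorithm": {
      "fixed_token_length": {
        "token_limit": 100
      }
    },
    "field_map": {
      "foo": {
        "bar": "bar_chunk"
      }
    }
  }
}
```

The key point of the sketch is the shape of `field_map`: the nested input and output fields are expressed as JSON objects rather than as dot paths.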
--- _ingest-pipelines/processors/text-chunking.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/_ingest-pipelines/processors/text-chunking.md b/_ingest-pipelines/processors/text-chunking.md index 97229d2aaa..4dccca4926 100644 --- a/_ingest-pipelines/processors/text-chunking.md +++ b/_ingest-pipelines/processors/text-chunking.md @@ -42,6 +42,9 @@ The following table lists the required and optional parameters for the `text_chu | `description` | String | Optional | A brief description of the processor. | | `tag` | String | Optional | An identifier tag for the processor. Useful when debugging in order to distinguish between processors of the same type. | +To perform chunking on nested fields, specify `input_field` and `output_field` values as JSON objects. Dot paths of nested fields are not supported. For example, use `"field_map": { "foo": { "bar": "bar_chunk"} }` instead of `"field_map": { "foo.bar": "foo.bar_chunk"}`. +{: .note} + ### Fixed token length algorithm The following table lists the optional parameters for the `fixed_token_length` algorithm. From 39d56c465f0c94d9cd4c757993bcbd4cfa159723 Mon Sep 17 00:00:00 2001 From: yuye-aws Date: Fri, 13 Sep 2024 22:32:55 +0800 Subject: [PATCH 053/111] Update flow framework additional fields in previous_node_inputs (#8233) * update: flow framework previous node inputs Signed-off-by: yuye-aws * update typo Signed-off-by: yuye-aws * update model id step input types Signed-off-by: yuye-aws * update create_tool note Signed-off-by: yuye-aws * reduce redundant step types Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: Daniel Widdis Signed-off-by: yuye-aws * optimize doc Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: Nathan Bower Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: Nathan Bower Signed-off-by: yuye-aws * Update _automating-configurations/workflow-steps.md Co-authored-by: Nathan Bower Signed-off-by: yuye-aws --------- Signed-off-by: yuye-aws Co-authored-by: Daniel Widdis Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _automating-configurations/workflow-steps.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/_automating-configurations/workflow-steps.md b/_automating-configurations/workflow-steps.md index 43685a957a..0f61874be6 100644 --- a/_automating-configurations/workflow-steps.md +++ b/_automating-configurations/workflow-steps.md @@ -75,9 +75,11 @@ You can include the following additional fields in the `user_inputs` field if th You can include the following additional fields in the `previous_node_inputs` field when indicated. -|Field |Data type |Description | -|--- |--- |--- | -|`model_id` |String |The `model_id` is used as an input for several steps. 
As a special case for the Register Agent step type, if an `llm.model_id` field is not present in the `user_inputs` and not present in `previous_node_inputs`, the `model_id` field from the previous node may be used as a backup for the model ID. | +| Field |Data type | Description | +|-----------------|--- |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `model_id` |String | The `model_id` is used as an input for several steps. As a special case for the `register_agent` step type, if an `llm.model_id` field is not present in the `user_inputs` and not present in `previous_node_inputs`, then the `model_id` field from the previous node may be used as a backup for the model ID. The `model_id` will also be included in the `parameters` input of the `create_tool` step for the `MLModelTool`. | +| `agent_id` |String | The `agent_id` is used as an input for several steps. The `agent_id` will also be included in the `parameters` input of the `create_tool` step for the `AgentTool`. | +| `connector_id` |String | The `connector_id` is used as an input for several steps. The `connector_id` will also be included in the `parameters` input of the `create_tool` step for the `ConnectorTool`. | ## Example workflow steps From 9897824129308ce0af39c36c3b533e8ae8e48cdf Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Fri, 13 Sep 2024 15:48:28 +0100 Subject: [PATCH 054/111] fixing the numbering on the install security page (#8256) Signed-off-by: Anton Rubin Co-authored-by: Melissa Vagi --- .../configuration/disable-enable-security.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/_security/configuration/disable-enable-security.md b/_security/configuration/disable-enable-security.md index 811fd2a69f..38bcc01cdd 100755 --- a/_security/configuration/disable-enable-security.md +++ b/_security/configuration/disable-enable-security.md @@ -155,22 +155,22 @@ Use the following steps to reinstall the plugin: 1. Disable shard allocation and stop all nodes so that shards don't move when the cluster is restarted: - ```json - curl -XPUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{ - "transient": { - "cluster.routing.allocation.enable": "none" - } - }' - ``` - {% include copy.html %} + ```json + curl -XPUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{ + "transient": { + "cluster.routing.allocation.enable": "none" + } + }' + ``` + {% include copy.html %} 2. Install the Security plugin on all nodes in your cluster using one of the [installation methods]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#install): - ```bash - bin/opensearch-plugin install opensearch-security - ``` - {% include copy.html %} - + ```bash + bin/opensearch-plugin install opensearch-security + ``` + {% include copy.html %} + 3. Add the necessary configuration to `opensearch.yml` for TLS encryption. See [Configuration]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/) for information about the settings that need to be configured. 
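The hunk above ends at the TLS configuration step. Although not part of this patch, the usual follow-up after the plugin is installed and the nodes are restarted is to re-enable shard allocation. A request along the following lines (shown only as a sketch) clears the transient setting so that allocation returns to its default of `all`:

```json
curl -XPUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.enable": null
  }
}'
```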
From 9f91cfc026db733ae638f92d4d2425fdfc7d6c34 Mon Sep 17 00:00:00 2001 From: Naveen Tatikonda Date: Fri, 13 Sep 2024 10:52:53 -0500 Subject: [PATCH 055/111] Add documentation for Faiss byte vector (#8170) * Add documentation for Faiss byte vector Signed-off-by: Naveen Tatikonda * A couple of rewordings and format changes before tech review Signed-off-by: Fanit Kolchina * Address Review Comments Signed-off-by: Naveen Tatikonda * Doc review Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Add clarification to memory estimation formulas Signed-off-by: Fanit Kolchina * Typo fix Signed-off-by: Fanit Kolchina --------- Signed-off-by: Naveen Tatikonda Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../supported-field-types/knn-vector.md | 172 +++++++++++++++++- .../tutorials/semantic-search-byte-vectors.md | 2 +- _search-plugins/knn/approximate-knn.md | 4 +- _search-plugins/knn/knn-index.md | 12 +- _search-plugins/knn/knn-score-script.md | 2 +- .../knn/knn-vector-quantization.md | 16 +- _search-plugins/knn/painless-functions.md | 2 +- _search-plugins/vector-search.md | 2 +- 8 files changed, 185 insertions(+), 27 deletions(-) diff --git a/_field-types/supported-field-types/knn-vector.md b/_field-types/supported-field-types/knn-vector.md index 80c4085485..f0dc831268 100644 --- a/_field-types/supported-field-types/knn-vector.md +++ b/_field-types/supported-field-types/knn-vector.md @@ -85,23 +85,30 @@ However, if you intend to use Painless scripting or a k-NN score script, you onl } ``` -## Lucene byte vector +## Byte vectors -By default, k-NN vectors are `float` vectors, where each dimension is 4 bytes. If you want to save storage space, you can use `byte` vectors with the `lucene` engine. In a `byte` vector, each dimension is a signed 8-bit integer in the [-128, 127] range. +By default, k-NN vectors are `float` vectors, in which each dimension is 4 bytes. If you want to save storage space, you can use `byte` vectors with the `faiss` or `lucene` engine. In a `byte` vector, each dimension is a signed 8-bit integer in the [-128, 127] range. -Byte vectors are supported only for the `lucene` engine. They are not supported for the `nmslib` and `faiss` engines. +Byte vectors are supported only for the `lucene` and `faiss` engines. They are not supported for the `nmslib` engine. {: .note} In [k-NN benchmarking tests](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/perf-tool), the use of `byte` rather than `float` vectors resulted in a significant reduction in storage and memory usage as well as improved indexing throughput and reduced query latency. Additionally, precision on recall was not greatly affected (note that recall can depend on various factors, such as the [quantization technique](#quantization-techniques) and data distribution). When using `byte` vectors, expect some loss of precision in the recall compared to using `float` vectors. Byte vectors are useful in large-scale applications and use cases that prioritize a reduced memory footprint in exchange for a minimal loss of recall. 
{: .important} - + +When using `byte` vectors with the `faiss` engine, we recommend using [SIMD optimization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index#simd-optimization-for-the-faiss-engine), which helps to significantly reduce search latencies and improve indexing throughput. +{: .important} + Introduced in k-NN plugin version 2.9, the optional `data_type` parameter defines the data type of a vector. The default value of this parameter is `float`. To use a `byte` vector, set the `data_type` parameter to `byte` when creating mappings for an index: - ```json +### Example: HNSW + +The following example creates a byte vector index with the `lucene` engine and `hnsw` algorithm: + +```json PUT test-index { "settings": { @@ -132,7 +139,7 @@ PUT test-index ``` {% include copy-curl.html %} -Then ingest documents as usual. Make sure each dimension in the vector is in the supported [-128, 127] range: +After creating the index, ingest documents as usual. Make sure each dimension in the vector is in the supported [-128, 127] range: ```json PUT test-index/_doc/1 @@ -168,6 +175,157 @@ GET test-index/_search ``` {% include copy-curl.html %} +### Example: IVF + +The `ivf` method requires a training step that creates and trains the model used to initialize the native library index during segment creation. For more information, see [Building a k-NN index from a model]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#building-a-k-nn-index-from-a-model). + +First, create an index that will contain byte vector training data. Specify the `faiss` engine and `ivf` algorithm and make sure that the `dimension` matches the dimension of the model you want to create: + +```json +PUT train-index +{ + "mappings": { + "properties": { + "train-field": { + "type": "knn_vector", + "dimension": 4, + "data_type": "byte" + } + } + } +} +``` +{% include copy-curl.html %} + +First, ingest training data containing byte vectors into the training index: + +```json +PUT _bulk +{ "index": { "_index": "train-index", "_id": "1" } } +{ "train-field": [127, 100, 0, -120] } +{ "index": { "_index": "train-index", "_id": "2" } } +{ "train-field": [2, -128, -10, 50] } +{ "index": { "_index": "train-index", "_id": "3" } } +{ "train-field": [13, -100, 5, 126] } +{ "index": { "_index": "train-index", "_id": "4" } } +{ "train-field": [5, 100, -6, -125] } +``` +{% include copy-curl.html %} + +Then, create and train the model named `byte-vector-model`. The model will be trained using the training data from the `train-field` in the `train-index`. Specify the `byte` data type: + +```json +POST _plugins/_knn/models/byte-vector-model/_train +{ + "training_index": "train-index", + "training_field": "train-field", + "dimension": 4, + "description": "model with byte data", + "data_type": "byte", + "method": { + "name": "ivf", + "engine": "faiss", + "space_type": "l2", + "parameters": { + "nlist": 1, + "nprobes": 1 + } + } +} +``` +{% include copy-curl.html %} + +To check the model training status, call the Get Model API: + +```json +GET _plugins/_knn/models/byte-vector-model?filter_path=state +``` +{% include copy-curl.html %} + +Once the training is complete, the `state` changes to `created`. 
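Because the status request above uses `filter_path=state`, the response is reduced to the training state. Once training succeeds, it should look similar to the following (an illustrative response, not captured from a live cluster):

```json
{
  "state": "created"
}
```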
+ +Next, create an index that will initialize its native library indexes using the trained model: + +```json +PUT test-byte-ivf +{ + "settings": { + "index": { + "knn": true + } + }, + "mappings": { + "properties": { + "my_vector": { + "type": "knn_vector", + "model_id": "byte-vector-model" + } + } + } +} +``` +{% include copy-curl.html %} + +Ingest the data containing the byte vectors that you want to search into the created index: + +```json +PUT _bulk?refresh=true +{"index": {"_index": "test-byte-ivf", "_id": "1"}} +{"my_vector": [7, 10, 15, -120]} +{"index": {"_index": "test-byte-ivf", "_id": "2"}} +{"my_vector": [10, -100, 120, -108]} +{"index": {"_index": "test-byte-ivf", "_id": "3"}} +{"my_vector": [1, -2, 5, -50]} +{"index": {"_index": "test-byte-ivf", "_id": "4"}} +{"my_vector": [9, -7, 45, -78]} +{"index": {"_index": "test-byte-ivf", "_id": "5"}} +{"my_vector": [80, -70, 127, -128]} +``` +{% include copy-curl.html %} + +Finally, search the data. Be sure to provide a byte vector in the k-NN vector field: + +```json +GET test-byte-ivf/_search +{ + "size": 2, + "query": { + "knn": { + "my_vector": { + "vector": [100, -120, 50, -45], + "k": 2 + } + } + } +} +``` +{% include copy-curl.html %} + +### Memory estimation + +In the best-case scenario, byte vectors require 25% of the memory required by 32-bit vectors. + +#### HNSW memory estimation + +The memory required for Hierarchical Navigable Small Worlds (HNSW) is estimated to be `1.1 * (dimension + 8 * m)` bytes/vector, where `m` is the maximum number of bidirectional links created for each element during the construction of the graph. + +As an example, assume that you have 1 million vectors with a dimension of 256 and an `m` of 16. The memory requirement can be estimated as follows: + +```r +1.1 * (256 + 8 * 16) * 1,000,000 ~= 0.39 GB +``` + +#### IVF memory estimation + +The memory required for IVF is estimated to be `1.1 * ((dimension * num_vectors) + (4 * nlist * dimension))` bytes/vector, where `nlist` is the number of buckets to partition vectors into. + +As an example, assume that you have 1 million vectors with a dimension of 256 and an `nlist` of 128. The memory requirement can be estimated as follows: + +```r +1.1 * ((256 * 1,000,000) + (4 * 128 * 256)) ~= 0.27 GB +``` + + ### Quantization techniques If your vectors are of the type `float`, you need to first convert them to the `byte` type before ingesting the documents. This conversion is accomplished by _quantizing the dataset_---reducing the precision of its vectors. There are many quantization techniques, such as scalar quantization or product quantization (PQ), which is used in the Faiss engine. The choice of quantization technique depends on the type of data you're using and can affect the accuracy of recall values. The following sections describe the scalar quantization algorithms that were used to quantize the [k-NN benchmarking test](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/perf-tool) data for the [L2](#scalar-quantization-for-the-l2-space-type) and [cosine similarity](#scalar-quantization-for-the-cosine-similarity-space-type) space types. The provided pseudocode is for illustration purposes only. @@ -269,7 +427,7 @@ return Byte(bval) ``` {% include copy.html %} -## Binary k-NN vectors +## Binary vectors You can reduce memory costs by a factor of 32 by switching from float to binary vectors. 
Using binary vector indexes can lower operational costs while maintaining high recall performance, making large-scale deployment more economical and efficient. diff --git a/_ml-commons-plugin/tutorials/semantic-search-byte-vectors.md b/_ml-commons-plugin/tutorials/semantic-search-byte-vectors.md index 7061d3cb5a..c4cc27f660 100644 --- a/_ml-commons-plugin/tutorials/semantic-search-byte-vectors.md +++ b/_ml-commons-plugin/tutorials/semantic-search-byte-vectors.md @@ -7,7 +7,7 @@ nav_order: 10 # Semantic search using byte-quantized vectors -This tutorial illustrates how to build a semantic search using the [Cohere Embed model](https://docs.cohere.com/reference/embed) and byte-quantized vectors. For more information about using byte-quantized vectors, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#lucene-byte-vector). +This tutorial shows you how to build a semantic search using the [Cohere Embed model](https://docs.cohere.com/reference/embed) and byte-quantized vectors. For more information about using byte-quantized vectors, see [Byte vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#byte-vectors). The Cohere Embed v3 model supports several `embedding_types`. For this tutorial, you'll use the `INT8` type to encode byte-quantized vectors. diff --git a/_search-plugins/knn/approximate-knn.md b/_search-plugins/knn/approximate-knn.md index e9cff8562f..a73844513e 100644 --- a/_search-plugins/knn/approximate-knn.md +++ b/_search-plugins/knn/approximate-knn.md @@ -322,7 +322,7 @@ To learn more about the radial search feature, see [k-NN radial search]({{site.u ### Using approximate k-NN with binary vectors -To learn more about using binary vectors with k-NN search, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +To learn more about using binary vectors with k-NN search, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). ## Spaces @@ -346,5 +346,5 @@ The cosine similarity formula does not include the `1 -` prefix. However, becaus With cosine similarity, it is not valid to pass a zero vector (`[0, 0, ...]`) as input. This is because the magnitude of such a vector is 0, which raises a `divide by 0` exception in the corresponding formula. Requests containing the zero vector will be rejected, and a corresponding exception will be thrown. {: .note } -The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). {: .note} diff --git a/_search-plugins/knn/knn-index.md b/_search-plugins/knn/knn-index.md index a6ffd922eb..5bb7257898 100644 --- a/_search-plugins/knn/knn-index.md +++ b/_search-plugins/knn/knn-index.md @@ -41,13 +41,13 @@ PUT /test-index ``` {% include copy-curl.html %} -## Lucene byte vector +## Byte vectors -Starting with k-NN plugin version 2.9, you can use `byte` vectors with the `lucene` engine to reduce the amount of storage space needed. 
For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector). +Starting with k-NN plugin version 2.17, you can use `byte` vectors with the `faiss` and `lucene` engines to reduce the amount of required memory and storage space. For more information, see [Byte vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#byte-vectors). -## Binary vector +## Binary vectors -Starting with k-NN plugin version 2.16, you can use `binary` vectors with the `faiss` engine to reduce the amount of required storage space. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +Starting with k-NN plugin version 2.16, you can use `binary` vectors with the `faiss` engine to reduce the amount of required storage space. For more information, see [Binary vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). ## SIMD optimization for the Faiss engine @@ -116,7 +116,7 @@ Method name | Requires training | Supported spaces | Description For hnsw, "innerproduct" is not available when PQ is used. {: .note} -The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). {: .note} #### HNSW parameters @@ -324,7 +324,7 @@ If you want to use less memory and increase indexing speed as compared to HNSW w If memory is a concern, consider adding a PQ encoder to your HNSW or IVF index. Because PQ is a lossy encoding, query quality will drop. -You can reduce the memory footprint by a factor of 2, with a minimal loss in search quality, by using the [`fp_16` encoder]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/#faiss-16-bit-scalar-quantization). If your vector dimensions are within the [-128, 127] byte range, we recommend using the [byte quantizer]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#lucene-byte-vector) to reduce the memory footprint by a factor of 4. To learn more about vector quantization options, see [k-NN vector quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/). +You can reduce the memory footprint by a factor of 2, with a minimal loss in search quality, by using the [`fp_16` encoder]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/#faiss-16-bit-scalar-quantization). If your vector dimensions are within the [-128, 127] byte range, we recommend using the [byte quantizer]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#byte-vectors) to reduce the memory footprint by a factor of 4. To learn more about vector quantization options, see [k-NN vector quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/). 
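As a minimal sketch of the `fp_16` option mentioned above, assuming the Faiss `sq` encoder with `type` set to `fp16` as described in the k-NN vector quantization documentation (the index name and dimension here are illustrative), the mapping might look like the following:

```json
PUT /test-fp16-index
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "knn_vector",
        "dimension": 256,
        "method": {
          "name": "hnsw",
          "engine": "faiss",
          "space_type": "l2",
          "parameters": {
            "encoder": {
              "name": "sq",
              "parameters": {
                "type": "fp16"
              }
            }
          }
        }
      }
    }
  }
}
```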
### Memory estimation diff --git a/_search-plugins/knn/knn-score-script.md b/_search-plugins/knn/knn-score-script.md index d2fd883e74..a184de2d3d 100644 --- a/_search-plugins/knn/knn-score-script.md +++ b/_search-plugins/knn/knn-score-script.md @@ -302,5 +302,5 @@ Cosine similarity returns a number between -1 and 1, and because OpenSearch rele With cosine similarity, it is not valid to pass a zero vector (`[0, 0, ... ]`) as input. This is because the magnitude of such a vector is 0, which raises a `divide by 0` exception in the corresponding formula. Requests containing the zero vector will be rejected, and a corresponding exception will be thrown. {: .note } -The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). {: .note} diff --git a/_search-plugins/knn/knn-vector-quantization.md b/_search-plugins/knn/knn-vector-quantization.md index 656ce72fd2..5675d57eab 100644 --- a/_search-plugins/knn/knn-vector-quantization.md +++ b/_search-plugins/knn/knn-vector-quantization.md @@ -13,13 +13,13 @@ By default, the k-NN plugin supports the indexing and querying of vectors of typ OpenSearch supports many varieties of quantization. In general, the level of quantization will provide a trade-off between the accuracy of the nearest neighbor search and the size of the memory footprint consumed by the vector search. The supported types include byte vectors, 16-bit scalar quantization, and product quantization (PQ). -## Lucene byte vector +## Byte vectors -Starting with k-NN plugin version 2.9, you can use `byte` vectors with the Lucene engine in order to reduce the amount of required memory. This requires quantizing the vectors outside of OpenSearch before ingesting them into an OpenSearch index. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector). +Starting with version 2.17, the k-NN plugin supports `byte` vectors with the `faiss` and `lucene` engines in order to reduce the amount of required memory. This requires quantizing the vectors outside of OpenSearch before ingesting them into an OpenSearch index. For more information, see [Byte vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#byte-vectors). ## Lucene scalar quantization -Starting with version 2.16, the k-NN plugin supports built-in scalar quantization for the Lucene engine. Unlike the [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector), which requires you to quantize vectors before ingesting the documents, the Lucene scalar quantizer quantizes input vectors in OpenSearch during ingestion. The Lucene scalar quantizer converts 32-bit floating-point input vectors into 7-bit integer vectors in each segment using the minimum and maximum quantiles computed based on the [`confidence_interval`](#confidence-interval) parameter. During search, the query vector is quantized in each segment using the segment's minimum and maximum quantiles in order to compute the distance between the query vector and the segment's quantized input vectors. 
+Starting with version 2.16, the k-NN plugin supports built-in scalar quantization for the Lucene engine. Unlike [byte vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#byte-vectors), which require you to quantize vectors before ingesting documents, the Lucene scalar quantizer quantizes input vectors in OpenSearch during ingestion. The Lucene scalar quantizer converts 32-bit floating-point input vectors into 7-bit integer vectors in each segment using the minimum and maximum quantiles computed based on the [`confidence_interval`](#confidence-interval) parameter. During search, the query vector is quantized in each segment using the segment's minimum and maximum quantiles in order to compute the distance between the query vector and the segment's quantized input vectors. Quantization can decrease the memory footprint by a factor of 4 in exchange for some loss in recall. Additionally, quantization slightly increases disk usage because it requires storing both the raw input vectors and the quantized vectors. @@ -115,7 +115,7 @@ In the ideal scenario, 7-bit vectors created by the Lucene scalar quantizer use #### HNSW memory estimation -The memory required for the Hierarchical Navigable Small World (HNSW) graph can be estimated as `1.1 * (dimension + 8 * M)` bytes/vector, where `M` is the maximum number of bidirectional links created for each element during the construction of the graph. +The memory required for the Hierarchical Navigable Small World (HNSW) graph can be estimated as `1.1 * (dimension + 8 * m)` bytes/vector, where `m` is the maximum number of bidirectional links created for each element during the construction of the graph. As an example, assume that you have 1 million vectors with a dimension of 256 and M of 16. The memory requirement can be estimated as follows: @@ -250,9 +250,9 @@ In the best-case scenario, 16-bit vectors produced by the Faiss SQfp16 quantizer #### HNSW memory estimation -The memory required for Hierarchical Navigable Small Worlds (HNSW) is estimated to be `1.1 * (2 * dimension + 8 * M)` bytes/vector. +The memory required for Hierarchical Navigable Small Worlds (HNSW) is estimated to be `1.1 * (2 * dimension + 8 * m)` bytes/vector, where `m` is the maximum number of bidirectional links created for each element during the construction of the graph. -As an example, assume that you have 1 million vectors with a dimension of 256 and M of 16. The memory requirement can be estimated as follows: +As an example, assume that you have 1 million vectors with a dimension of 256 and an `m` of 16. The memory requirement can be estimated as follows: ```r 1.1 * (2 * 256 + 8 * 16) * 1,000,000 ~= 0.656 GB @@ -260,9 +260,9 @@ As an example, assume that you have 1 million vectors with a dimension of 256 an #### IVF memory estimation -The memory required for IVF is estimated to be `1.1 * (((2 * dimension) * num_vectors) + (4 * nlist * d))` bytes/vector. +The memory required for IVF is estimated to be `1.1 * (((2 * dimension) * num_vectors) + (4 * nlist * dimension))` bytes/vector, where `nlist` is the number of buckets to partition vectors into. -As an example, assume that you have 1 million vectors with a dimension of 256 and `nlist` of 128. The memory requirement can be estimated as follows: +As an example, assume that you have 1 million vectors with a dimension of 256 and an `nlist` of 128. 
The memory requirement can be estimated as follows: ```r 1.1 * (((2 * 256) * 1,000,000) + (4 * 128 * 256)) ~= 0.525 GB diff --git a/_search-plugins/knn/painless-functions.md b/_search-plugins/knn/painless-functions.md index cc27776fc4..7a8d9fec7b 100644 --- a/_search-plugins/knn/painless-functions.md +++ b/_search-plugins/knn/painless-functions.md @@ -55,7 +55,7 @@ l1Norm | `float l1Norm (float[] queryVector, doc['vector field'])` | This functi cosineSimilarity | `float cosineSimilarity (float[] queryVector, doc['vector field'])` | Cosine similarity is an inner product of the query vector and document vector normalized to both have a length of 1. If the magnitude of the query vector doesn't change throughout the query, you can pass the magnitude of the query vector to improve performance, instead of calculating the magnitude every time for every filtered document:
`float cosineSimilarity (float[] queryVector, doc['vector field'], float normQueryVector)`
In general, the range of cosine similarity is [-1, 1]. However, in the case of information retrieval, the cosine similarity of two documents ranges from 0 to 1 because the tf-idf statistic can't be negative. Therefore, the k-NN plugin adds 1.0 in order to always yield a positive cosine similarity score. hamming | `float hamming (float[] queryVector, doc['vector field'])` | This function calculates the Hamming distance between a given query vector and document vectors. The Hamming distance is the number of positions at which the corresponding elements are different. The shorter the distance, the more relevant the document is, so this example inverts the return value of the Hamming distance. -The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). {: .note} ## Constraints diff --git a/_search-plugins/vector-search.md b/_search-plugins/vector-search.md index cd893f4144..cc298786a3 100644 --- a/_search-plugins/vector-search.md +++ b/_search-plugins/vector-search.md @@ -57,7 +57,7 @@ PUT test-index You must designate the field that will store vectors as a [`knn_vector`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/) field type. OpenSearch supports vectors of up to 16,000 dimensions, each of which is represented as a 32-bit or 16-bit float. -To save storage space, you can use `byte` or `binary` vectors. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector) and [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +To save storage space, you can use `byte` or `binary` vectors. For more information, see [Byte vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#byte-vectors) and [Binary vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-vectors). ### k-NN vector search From 562cb51ff5c9169244e1f0ab42e77befe859dd90 Mon Sep 17 00:00:00 2001 From: Derek Ho Date: Fri, 13 Sep 2024 11:59:01 -0400 Subject: [PATCH 056/111] Add changes for multiple signing keys (#8243) * Add changes for multiple signing keys Signed-off-by: Derek Ho * Remove extra sentence Signed-off-by: Derek Ho * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Derek Ho Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _security/authentication-backends/jwt.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_security/authentication-backends/jwt.md b/_security/authentication-backends/jwt.md index 3f28dfecfd..6c7311e7dc 100644 --- a/_security/authentication-backends/jwt.md +++ b/_security/authentication-backends/jwt.md @@ -117,7 +117,7 @@ The following table lists the configuration parameters. 
Name | Description :--- | :--- -`signing_key` | The signing key to use when verifying the token. If you use a symmetric key algorithm, it is the base64-encoded shared secret. If you use an asymmetric algorithm, it contains the public key. +`signing_key` | The signing key(s) used to verify the token. If you use a symmetric key algorithm, this is the Base64-encoded shared secret. If you use an asymmetric algorithm, the algorithm contains the public key. To pass multiple keys, use a comma-separated list or enumerate the keys. `jwt_header` | The HTTP header in which the token is transmitted. This is typically the `Authorization` header with the `Bearer` schema,`Authorization: Bearer `. Default is `Authorization`. Replacing this field with a value other than `Authorization` prevents the audit log from properly redacting the JWT header from audit messages. It is recommended that users only use `Authorization` when using JWTs with audit logging. `jwt_url_parameter` | If the token is not transmitted in the HTTP header but rather as an URL parameter, define the name of the parameter here. `subject_key` | The key in the JSON payload that stores the username. If not set, the [subject](https://tools.ietf.org/html/rfc7519#section-4.1.2) registered claim is used. From f8c4f5c0665ec13a0dd21bc45503525cd0c40bb6 Mon Sep 17 00:00:00 2001 From: Ruirui Zhang Date: Fri, 13 Sep 2024 09:23:43 -0700 Subject: [PATCH 057/111] Add documentation for workload management (#8228) * add documentation for workload management Signed-off-by: Ruirui Zhang * Update workload-management.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../availability-recovery.md | 5 ++ .../workload-management.md | 60 +++++++++++++++++++ 2 files changed, 65 insertions(+) create mode 100644 _tuning-your-cluster/availability-and-recovery/workload-management.md diff --git a/_install-and-configure/configuring-opensearch/availability-recovery.md b/_install-and-configure/configuring-opensearch/availability-recovery.md index d25396a63f..94960ebe0a 100644 --- a/_install-and-configure/configuring-opensearch/availability-recovery.md +++ b/_install-and-configure/configuring-opensearch/availability-recovery.md @@ -16,6 +16,7 @@ Availability and recovery settings include settings for the following: - [Shard indexing backpressure](#shard-indexing-backpressure-settings) - [Segment replication](#segment-replication-settings) - [Cross-cluster replication](#cross-cluster-replication-settings) +- [Workload management](#workload-management-settings) To learn more about static and dynamic settings, see [Configuring OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/). @@ -70,3 +71,7 @@ For information about segment replication backpressure settings, see [Segment re ## Cross-cluster replication settings For information about cross-cluster replication settings, see [Replication settings]({{site.url}}{{site.baseurl}}/tuning-your-cluster/replication-plugin/settings/). + +## Workload management settings + +Workload management is a mechanism that allows administrators to organize queries into distinct groups. 
For more information, see [Workload management settings]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/workload-management/#workload-management-settings). diff --git a/_tuning-your-cluster/availability-and-recovery/workload-management.md b/_tuning-your-cluster/availability-and-recovery/workload-management.md new file mode 100644 index 0000000000..1c6d9baf46 --- /dev/null +++ b/_tuning-your-cluster/availability-and-recovery/workload-management.md @@ -0,0 +1,60 @@ +--- +layout: default +title: Workload management +nav_order: 60 +has_children: false +parent: Availability and recovery +--- + +# Workload management + +Workload management is a mechanism that allows administrators to organize queries into distinct groups, referred to as _query groups_. These query groups enable admins to limit the cumulative resource usage of each group, ensuring more balanced and fair resource distribution between them. This mechanism provides greater control over resource consumption so that no single group can monopolize cluster resources at the expense of others. + +## Query group + +A query group is a logical construct designed to manage search requests within defined virtual resource limits. The query group service tracks and aggregates resource usage at the node level for different groups, enforcing restrictions to ensure that no group exceeds its allocated resources. Depending on the configured containment mode, the system can limit or terminate tasks that surpass these predefined thresholds. + +Because the definition of a query group is stored in the cluster state, these resource limits are enforced consistently across all nodes in the cluster. + +### Schema + +Query groups use the following schema: + +```json +{ + "_id": "fafjafjkaf9ag8a9ga9g7ag0aagaga", + "resource_limits": { + "memory": 0.4, + "cpu": 0.2 + }, + "resiliency_mode": "enforced", + "name": "analytics", + "updated_at": 4513232415 +} +``` + +### Resource type + +Resource types represent the various system resources that are monitored and managed across different query groups. The following resource types are supported: + +- CPU usage +- JVM memory usage + +### Resiliency mode + +Resiliency mode determines how the assigned resource limits relate to the actual allowed resource usage. The following resiliency modes are supported: + +- **Soft mode** -- The query group can exceed the query group resource limits if the node is not under duress. +- **Enforced mode** -- The query group will never exceed the assigned limits and will be canceled as soon as the limits are exceeded. +- **Monitor mode** -- The query group will not cause any cancellations and will only log the eligible task cancellations. + +## Workload management settings + +Workload management settings allow you to define thresholds for rejecting or canceling tasks based on resource usage. Adjusting the following settings can help to maintain optimal performance and stability within your OpenSearch cluster. + +Setting | Default | Description +:--- | :--- | :--- +`wlm.query_group.node.memory_rejection_threshold` | `0.8` | The memory-based rejection threshold for query groups at the node level. Tasks that exceed this threshold will be rejected. The maximum allowed value is `0.9`. +`wlm.query_group.node.memory_cancellation_threshold` | `0.9` | The memory-based cancellation threshold for query groups at the node level. Tasks that exceed this threshold will be canceled. The maximum allowed value is `0.95`. 
+`wlm.query_group.node.cpu_rejection_threshold` | `0.8` | The CPU-based rejection threshold for query groups at the node level. Tasks that exceed this threshold will be rejected. The maximum allowed value is `0.9`. +`wlm.query_group.node.cpu_cancellation_threshold` | `0.9` | The CPU-based cancellation threshold for query groups at the node level. Tasks that exceed this threshold will be canceled. The maximum allowed value is `0.95`. From 978c1576f88727738997e18cc5fadd223bc15115 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Fri, 13 Sep 2024 11:31:36 -0500 Subject: [PATCH 058/111] Fix workload formatting error (#8250) Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- .../user-guide/understanding-workloads/choosing-a-workload.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md b/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md index 6016caee0a..ae973a7c62 100644 --- a/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md +++ b/_benchmark/user-guide/understanding-workloads/choosing-a-workload.md @@ -22,8 +22,8 @@ Consider the following criteria when deciding which workload would work best for ## General search clusters -For benchmarking clusters built for general search use cases, start with the `[nyc_taxis]`(https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/nyc_taxis) workload. This workload contains data about the rides taken in yellow taxis in New York City in 2015. +For benchmarking clusters built for general search use cases, start with the [nyc_taxis](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/nyc_taxis) workload. This workload contains data about the rides taken in yellow taxis in New York City in 2015. ## Log data -For benchmarking clusters built for indexing and search with log data, use the [`http_logs`](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/http_logs) workload. This workload contains data about the 1998 World Cup. \ No newline at end of file +For benchmarking clusters built for indexing and search with log data, use the [http_logs](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/http_logs) workload. This workload contains data about the 1998 World Cup. 
From c9bd6fea0a032347df3e35316b6e19b0e33c6f1f Mon Sep 17 00:00:00 2001 From: Amit Galitzky Date: Fri, 13 Sep 2024 12:29:45 -0700 Subject: [PATCH 059/111] Adding documentation for remote index use in AD (#8191) * adding documentation for remote index use in AD Signed-off-by: Amit Galitzky * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * adding additional security information Signed-off-by: Amit Galitzky * fixing formatting issues Signed-off-by: Amit Galitzky * Update _observing-your-data/ad/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * doc review new content and address editorial review comments Signed-off-by: Melissa Vagi * doc review new content and address editorial review comments Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Signed-off-by: Melissa Vagi * doc review new content and address editorial review comments Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/security.md Signed-off-by: Melissa Vagi --------- Signed-off-by: Amit Galitzky Signed-off-by: Melissa Vagi Co-authored-by: Melissa Vagi Co-authored-by: Nathan Bower --- _observing-your-data/ad/index.md | 9 +++++-- _observing-your-data/ad/security.md | 41 +++++++++++++++++++++++++++++ 2 files changed, 48 insertions(+), 2 deletions(-) diff --git a/_observing-your-data/ad/index.md b/_observing-your-data/ad/index.md index 5dfa1b8f1a..f565ca6e31 100644 --- a/_observing-your-data/ad/index.md +++ b/_observing-your-data/ad/index.md @@ -10,7 +10,7 @@ redirect_from: # Anomaly detection -An anomaly in OpenSearch is any unusual behavior change in your time-series data. Anomalies can provide valuable insights into your data. For example, for IT infrastructure data, an anomaly in the memory usage metric might help you uncover early signs of a system failure. +An _anomaly_ in OpenSearch is any unusual behavior change in your time-series data. Anomalies can provide valuable insights into your data. For example, for IT infrastructure data, an anomaly in the memory usage metric might help you uncover early signs of a system failure. It can be challenging to discover anomalies using conventional methods such as creating visualizations and dashboards. 
You could configure an alert based on a static threshold, but this requires prior domain knowledge and isn't adaptive to data that exhibits organic growth or seasonal behavior. @@ -29,9 +29,14 @@ A detector is an individual anomaly detection task. You can define multiple dete 1. Add in the detector details. - Enter a name and brief description. Make sure the name is unique and descriptive enough to help you to identify the purpose of the detector. 1. Specify the data source. - - For **Data source**, choose the index you want to use as the data source. You can optionally use index patterns to choose multiple indexes. + - For **Data source**, choose one or more indexes to use as the data source. Alternatively, you can use an alias or index pattern to choose multiple indexes. + - Detectors can use remote indexes. You can access them using the `cluster-name:index-name` pattern. See [Cross-cluster search]({{site.url}}{{site.baseurl}}/search-plugins/cross-cluster-search/) for more information. Alternatively, you can select clusters and indexes in OpenSearch Dashboards 2.17 or later. To learn about configuring remote indexes with the Security plugin enabled, see [Selecting remote indexes with fine-grained access control]({{site.url}}{{site.baseurl}}/observing-your-data/ad/security/#selecting-remote-indexes-with-fine-grained-access-control) in the [Anomaly detection security](observing-your-data/ad/security/) documentation. - (Optional) For **Data filter**, filter the index you chose as the data source. From the **Data filter** menu, choose **Add data filter**, and then design your filter query by selecting **Field**, **Operator**, and **Value**, or choose **Use query DSL** and add your own JSON filter query. Only [Boolean queries]({{site.url}}{{site.baseurl}}/query-dsl/compound/bool/) are supported for query domain-specific language (DSL). + +To create a cross-cluster detector in OpenSearch Dashboards, the following [permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/) are required: `indices:data/read/field_caps`, `indices:admin/resolve/index`, and `cluster:monitor/remote/info`. +{: .note} + #### Example filter using query DSL The query is designed to retrieve documents in which the `urlPath.keyword` field matches one of the following specified values: diff --git a/_observing-your-data/ad/security.md b/_observing-your-data/ad/security.md index 8eeaa3df41..e4816cec46 100644 --- a/_observing-your-data/ad/security.md +++ b/_observing-your-data/ad/security.md @@ -23,6 +23,11 @@ As an admin user, you can use the Security plugin to assign specific permissions The Security plugin has two built-in roles that cover most anomaly detection use cases: `anomaly_full_access` and `anomaly_read_access`. For descriptions of each, see [Predefined roles]({{site.url}}{{site.baseurl}}/security/access-control/users-roles#predefined-roles). +If you use OpenSearch Dashboards to create your anomaly detectors, you may experience access issues even with `anomaly_full_access`. This issue has been resolved in OpenSearch 2.17, but for earlier versions, the following additional permissions need to be added: + +- `indices:data/read/search` -- You need this permission because the Anomaly Detection plugin needs to search the data source in order to validate whether there is enough data to train the model. 
+- `indices:admin/mappings/fields/get` and `indices:admin/mappings/fields/get*` -- You need these permissions to validate whether the given data source has a valid timestamp field and categorical field (in the case of creating a high-cardinality detector). + If these roles don't meet your needs, mix and match individual anomaly detection [permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/) to suit your use case. Each action corresponds to an operation in the REST API. For example, the `cluster:admin/opensearch/ad/detector/delete` permission lets you delete detectors. ### A note on alerts and fine-grained access control @@ -31,6 +36,42 @@ When a trigger generates an alert, the detector and monitor configurations, the To reduce the chances of unintended users viewing metadata that could describe an index, we recommend that administrators enable role-based access control and keep these kinds of design elements in mind when assigning permissions to the intended group of users. See [Limit access by backend role](#advanced-limit-access-by-backend-role) for details. +### Selecting remote indexes with fine-grained access control + +To use a remote index as a data source for a detector, see the setup steps in [Authentication flow]({{site.url}}{{site.baseurl}}/search-plugins/cross-cluster-search/#authentication-flow) in [Cross-cluster search]({{site.url}}{{site.baseurl}}/search-plugins/cross-cluster-search/). You must use a role that exists in both the remote and local clusters. The remote cluster must map the chosen role to the same username as in the local cluster. + +--- + +#### Example: Create a new user on the local cluster + +1. Create a new user on the local cluster to use for detector creation: + +``` +curl -XPUT -k -u 'admin:' 'https://localhost:9200/_plugins/_security/api/internalusers/anomalyuser' -H 'Content-Type: application/json' -d '{"password":"password"}' +``` +{% include copy-curl.html %} + +2. Map the new user to the `anomaly_full_access` role: + +``` +curl -XPUT -k -u 'admin:' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/rolesmapping/anomaly_full_access' -d '{"users" : ["anomalyuser"]}' +``` +{% include copy-curl.html %} + +3. On the remote cluster, create the same user and map `anomaly_full_access` to that role: + +``` +curl -XPUT -k -u 'admin:' 'https://localhost:9250/_plugins/_security/api/internalusers/anomalyuser' -H 'Content-Type: application/json' -d '{"password":"password"}' +curl -XPUT -k -u 'admin:' -H 'Content-Type: application/json' 'https://localhost:9250/_plugins/_security/api/rolesmapping/anomaly_full_access' -d '{"users" : ["anomalyuser"]}' +``` +{% include copy-curl.html %} + +--- + +### Custom results index + +To use a custom results index, you need additional permissions not included in the default roles provided by the OpenSearch Security plugin. To add these permissions, see [Step 1: Define a detector]({{site.url}}{{site.baseurl}}/observing-your-data/ad/index/#step-1-define-a-detector) in the [Anomaly detection]({{site.url}}{{site.baseurl}}/observing-your-data/ad/index/) documentation. + ## (Advanced) Limit access by backend role Use backend roles to configure fine-grained access to individual detectors based on roles. For example, users of different departments in an organization can view detectors owned by their own department. 
From 360908be81e71bbafe59462d31b85b6f4100a1fc Mon Sep 17 00:00:00 2001 From: Landon Lengyel Date: Fri, 13 Sep 2024 13:36:24 -0600 Subject: [PATCH 060/111] http adding endpoint to ingest data with (#8067) * Documented the endpoint that is used with this option. Signed-off-by: Landon Lengyel * Update _data-prepper/pipelines/configuration/sources/http.md Signed-off-by: Melissa Vagi * Update _data-prepper/pipelines/configuration/sources/http.md Signed-off-by: Melissa Vagi * Update _data-prepper/pipelines/configuration/sources/http.md Signed-off-by: Melissa Vagi * Update _data-prepper/pipelines/configuration/sources/http.md Signed-off-by: Melissa Vagi * Update http.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update http.md Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi Co-authored-by: Melissa Vagi --- .../pipelines/configuration/sources/http.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/_data-prepper/pipelines/configuration/sources/http.md b/_data-prepper/pipelines/configuration/sources/http.md index 2171d1ea02..574f49e289 100644 --- a/_data-prepper/pipelines/configuration/sources/http.md +++ b/_data-prepper/pipelines/configuration/sources/http.md @@ -10,7 +10,7 @@ redirect_from: # http -The `http` plugin accepts HTTP requests from clients. Currently, `http` only supports the JSON UTF-8 codec for incoming requests, such as `[{"key1": "value1"}, {"key2": "value2"}]`. The following table describes options you can use to configure the `http` source. +The `http` plugin accepts HTTP requests from clients. The following table describes options you can use to configure the `http` source. Option | Required | Type | Description :--- | :--- | :--- | :--- @@ -36,6 +36,19 @@ aws_region | Conditionally | String | AWS region used by ACM or Amazon S3. Requi Content will be added to this section.---> +## Ingestion + +Clients should send HTTP `POST` requests to the endpoint `/log/ingest`. + +The `http` protocol only supports the JSON UTF-8 codec for incoming requests, for example, `[{"key1": "value1"}, {"key2": "value2"}]`. + +#### Example: Ingest data with cURL + +The following cURL command can be used to ingest data: + +`curl "http://localhost:2021/log/ingest" --data '[{"key1": "value1"}, {"key2": "value2"}]'` +{% include copy-curl.html %} + ## Metrics The `http` source includes the following metrics. From 8c74b88d40def22a9805ed944f1f917e35c3ed14 Mon Sep 17 00:00:00 2001 From: Kaituo Li Date: Fri, 13 Sep 2024 16:50:41 -0700 Subject: [PATCH 061/111] Add documentation for rule-based anomaly detection and imputation (#8202) * Add documentation for rule-based anomaly detection and imputation This PR introduces new documentation for rule-based anomaly detection (AD) and imputation options, providing detailed guidance on configuring these features. It also updates the maximum shingle size information and enhances the documentation for window delay settings. Testing done: - Successfully ran Jekyll build and reviewed the updated documentation to ensure all changes are correctly displayed. 
Signed-off-by: Kaituo Li * Doc review Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/result-mapping.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update index.md Copy edit documentation Signed-off-by: Melissa Vagi * Update result-mapping.md Doc review complete Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi * Fix links Signed-off-by: Melissa Vagi * Fix links Signed-off-by: Melissa Vagi * Address editorial feedback Signed-off-by: Melissa Vagi * Address editorial feedback Signed-off-by: Melissa Vagi * Update _observing-your-data/ad/index.md Signed-off-by: Melissa Vagi --------- Signed-off-by: Kaituo Li Signed-off-by: Melissa Vagi Co-authored-by: Melissa Vagi --- .../ad/dashboards-anomaly-detection.md | 4 +- _observing-your-data/ad/index.md | 226 +++++++++++------- _observing-your-data/ad/result-mapping.md | 97 +++++++- 3 files changed, 222 insertions(+), 105 deletions(-) diff --git a/_observing-your-data/ad/dashboards-anomaly-detection.md b/_observing-your-data/ad/dashboards-anomaly-detection.md index 679237094a..ad6fa5950b 100644 --- a/_observing-your-data/ad/dashboards-anomaly-detection.md +++ b/_observing-your-data/ad/dashboards-anomaly-detection.md @@ -18,12 +18,12 @@ You can connect data visualizations to OpenSearch datasets and then create, run, Before getting started, you must have: - Installed OpenSearch and OpenSearch Dashboards version 2.9 or later. See [Installing OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/index/). -- Installed the Anomaly Detection plugin version 2.9 or later. See [Installing OpenSearch plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins). +- Installed the Anomaly Detection plugin version 2.9 or later. 
See [Installing OpenSearch plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/). - Installed the Anomaly Detection Dashboards plugin version 2.9 or later. See [Managing OpenSearch Dashboards plugins]({{site.url}}{{site.baseurl}}/install-and-configure/install-dashboards/plugins/) to get started. ## General requirements for anomaly detection visualizations -Anomaly detection visualizations are displayed as time-series charts that give you a snapshot of when anomalies have occurred from different anomaly detectors you have configured for the visualization. You can display up to 10 metrics on your chart, and each series can be shown as a line on the chart. Note that only real-time anomalies will be visible on the chart. For more information on real-time and historical anomaly detection, see [Anomaly detection, Step 3: Set up detector jobs]({{site.url}}{{site.baseurl}}/observing-your-data/ad/index/#step-3-set-up-detector-jobs). +Anomaly detection visualizations are displayed as time-series charts that give you a snapshot of when anomalies have occurred from different anomaly detectors you have configured for the visualization. You can display up to 10 metrics on your chart, and each series can be shown as a line on the chart. Note that only real-time anomalies will be visible on the chart. For more information about real-time and historical anomaly detection, see [Anomaly detection, Step 3: Set up detector jobs]({{site.url}}{{site.baseurl}}/observing-your-data/ad/index/#step-3-setting-up-detector-jobs). Keep in mind the following requirements when setting up or creating anomaly detection visualizations. The visualization: diff --git a/_observing-your-data/ad/index.md b/_observing-your-data/ad/index.md index f565ca6e31..657c3c90cb 100644 --- a/_observing-your-data/ad/index.md +++ b/_observing-your-data/ad/index.md @@ -10,21 +10,32 @@ redirect_from: # Anomaly detection -An _anomaly_ in OpenSearch is any unusual behavior change in your time-series data. Anomalies can provide valuable insights into your data. For example, for IT infrastructure data, an anomaly in the memory usage metric might help you uncover early signs of a system failure. +An _anomaly_ in OpenSearch is any unusual behavior change in your time-series data. Anomalies can provide valuable insights into your data. For example, for IT infrastructure data, an anomaly in the memory usage metric can help identify early signs of a system failure. -It can be challenging to discover anomalies using conventional methods such as creating visualizations and dashboards. You could configure an alert based on a static threshold, but this requires prior domain knowledge and isn't adaptive to data that exhibits organic growth or seasonal behavior. +Conventional techniques like visualizations and dashboards can make it difficult to uncover anomalies. Configuring alerts based on static thresholds is possible, but this approach requires prior domain knowledge and may not adapt to data with organic growth or seasonal trends. -Anomaly detection automatically detects anomalies in your OpenSearch data in near real-time using the Random Cut Forest (RCF) algorithm. RCF is an unsupervised machine learning algorithm that models a sketch of your incoming data stream to compute an `anomaly grade` and `confidence score` value for each incoming data point. These values are used to differentiate an anomaly from normal variations.
For more information about how RCF works, see [Random Cut Forests](https://www.semanticscholar.org/paper/Robust-Random-Cut-Forest-Based-Anomaly-Detection-on-Guha-Mishra/ecb365ef9b67cd5540cc4c53035a6a7bd88678f9). +Anomaly detection automatically detects anomalies in your OpenSearch data in near real time using the Random Cut Forest (RCF) algorithm. RCF is an unsupervised machine learning algorithm that models a sketch of your incoming data stream to compute an _anomaly grade_ and _confidence score_ value for each incoming data point. These values are used to differentiate an anomaly from normal variations. For more information about how RCF works, see [Robust Random Cut Forest Based Anomaly Detection on Streams](https://www.semanticscholar.org/paper/Robust-Random-Cut-Forest-Based-Anomaly-Detection-on-Guha-Mishra/ecb365ef9b67cd5540cc4c53035a6a7bd88678f9). You can pair the Anomaly Detection plugin with the [Alerting plugin]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/) to notify you as soon as an anomaly is detected. +{: .note} + +## Getting started with anomaly detection in OpenSearch Dashboards -To get started, choose **Anomaly Detection** in OpenSearch Dashboards. -To first test with sample streaming data, you can try out one of the preconfigured detectors with one of the sample datasets. +To get started, go to **OpenSearch Dashboards** > **OpenSearch Plugins** > **Anomaly Detection**. ## Step 1: Define a detector -A detector is an individual anomaly detection task. You can define multiple detectors, and all the detectors can run simultaneously, with each analyzing data from different sources. +A _detector_ is an individual anomaly detection task. You can define multiple detectors, and all detectors can run simultaneously, with each analyzing data from different sources. You can define a detector by following these steps: + +1. On the **Anomaly detection** page, select the **Create detector** button. +2. On the **Define detector** page, enter the required information in the **Detector details** pane. +3. In the **Select data** pane, specify the data source by choosing a source from the **Index** dropdown menu. You can choose an index, index patterns, or an alias. +4. (Optional) Filter the data source by selecting **Add data filter** and then entering the conditions for **Field**, **Operator**, and **Value**. Alternatively, you can choose **Use query DSL** and add your JSON filter query. Only [Boolean queries]({{site.url}}{{site.baseurl}}/query-dsl/compound/bool/) are supported for query domain-specific language (DSL). +#### Example: Filtering data using query DSL + +The following example query retrieves documents in which the `urlPath.keyword` field matches any of the specified values: 1. Choose **Create detector**. 1. Add in the detector details. - Enter a name and brief description. Make sure the name is unique and descriptive enough to help you to identify the purpose of the detector. 1. Specify the data source. @@ -33,12 +44,8 @@ A detector is an individual anomaly detection task. You can define multiple dete - Detectors can use remote indexes. You can access them using the `cluster-name:index-name` pattern. See [Cross-cluster search]({{site.url}}{{site.baseurl}}/search-plugins/cross-cluster-search/) for more information. Alternatively, you can select clusters and indexes in OpenSearch Dashboards 2.17 or later.
To learn about configuring remote indexes with the Security plugin enabled, see [Selecting remote indexes with fine-grained access control]({{site.url}}{{site.baseurl}}/observing-your-data/ad/security/#selecting-remote-indexes-with-fine-grained-access-control) in the [Anomaly detection security](observing-your-data/ad/security/) documentation. - (Optional) For **Data filter**, filter the index you chose as the data source. From the **Data filter** menu, choose **Add data filter**, and then design your filter query by selecting **Field**, **Operator**, and **Value**, or choose **Use query DSL** and add your own JSON filter query. Only [Boolean queries]({{site.url}}{{site.baseurl}}/query-dsl/compound/bool/) are supported for query domain-specific language (DSL). - To create a cross-cluster detector in OpenSearch Dashboards, the following [permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/) are required: `indices:data/read/field_caps`, `indices:admin/resolve/index`, and `cluster:monitor/remote/info`. {: .note} - -#### Example filter using query DSL -The query is designed to retrieve documents in which the `urlPath.keyword` field matches one of the following specified values: - /domain/{id}/short - /sub_dir/{id}/short @@ -67,40 +74,38 @@ The query is designed to retrieve documents in which the `urlPath.keyword` field } } ``` + {% include copy-curl.html %} -1. Specify a timestamp. - - Select the **Timestamp field** in your index. -1. Define operation settings. - - For **Operation settings**, define the **Detector interval**, which is the time interval at which the detector collects data. - - The detector aggregates the data in this interval, then feeds the aggregated result into the anomaly detection model. - The shorter you set this interval, the fewer data points the detector aggregates. - The anomaly detection model uses a shingling process, a technique that uses consecutive data points to create a sample for the model. This process needs a certain number of aggregated data points from contiguous intervals. - - - We recommend setting the detector interval based on your actual data. If it's too long it might delay the results, and if it's too short it might miss some data. It also won't have a sufficient number of consecutive data points for the shingle process. +5. In the **Timestamp** pane, select a field from the **Timestamp field** dropdown menu. - - (Optional) To add extra processing time for data collection, specify a **Window delay** value. +6. In the **Operation settings** pane, define the **Detector interval**, which is the interval at which the detector collects data. + - The detector aggregates the data at this interval and then feeds the aggregated result into the anomaly detection model. The shorter the interval, the fewer data points the detector aggregates. The anomaly detection model uses a shingling process, a technique that uses consecutive data points to create a sample for the model. This process requires a certain number of aggregated data points from contiguous intervals. + - You should set the detector interval based on your actual data. If the detector interval is too long, then it might delay the results. If the detector interval is too short, then it might miss some data. The detector interval also will not have a sufficient number of consecutive data points for the shingle process. + - (Optional) To add extra processing time for data collection, specify a **Window delay** value. 
- This value tells the detector that the data is not ingested into OpenSearch in real time but with a certain delay. Set the window delay to shift the detector interval to account for this delay. - - For example, say the detector interval is 10 minutes and data is ingested into your cluster with a general delay of 1 minute. Assume the detector runs at 2:00. The detector attempts to get the last 10 minutes of data from 1:50 to 2:00, but because of the 1-minute delay, it only gets 9 minutes of data and misses the data from 1:59 to 2:00. Setting the window delay to 1 minute shifts the interval window to 1:49--1:59, so the detector accounts for all 10 minutes of the detector interval time. -1. Specify custom results index. - - The Anomaly Detection plugin allows you to store anomaly detection results in a custom index of your choice. To enable this, select **Enable custom results index** and provide a name for your index, for example, `abc`. The plugin then creates an alias prefixed with `opensearch-ad-plugin-result-` followed by your chosen name, for example, `opensearch-ad-plugin-result-abc`. This alias points to an actual index with a name containing the date and a sequence number, like `opensearch-ad-plugin-result-abc-history-2024.06.12-000002`, where your results are stored. + - For example, the detector interval is 10 minutes and data is ingested into your cluster with a general delay of 1 minute. Assume the detector runs at 2:00. The detector attempts to get the last 10 minutes of data from 1:50 to 2:00, but because of the 1-minute delay, it only gets 9 minutes of data and misses the data from 1:59 to 2:00. Setting the window delay to 1 minute shifts the interval window to 1:49--1:59, so the detector accounts for all 10 minutes of the detector interval time. + - To avoid missing any data, set the **Window delay** to the upper limit of the expected ingestion delay. This ensures that the detector captures all data during its interval, reducing the risk of missing relevant information. While a longer window delay helps capture all data, too long of a window delay can hinder real-time anomaly detection because the detector will look further back in time. Find a balance to maintain both data accuracy and timely detection. - You can use the dash “-” sign to separate the namespace to manage custom results index permissions. For example, if you use `opensearch-ad-plugin-result-financial-us-group1` as the results index, you can create a permission role based on the pattern `opensearch-ad-plugin-result-financial-us-*` to represent the "financial" department at a granular level for the "us" area. +7. Specify a custom results index. + - The Anomaly Detection plugin allows you to store anomaly detection results in a custom index of your choice. Select **Enable custom results index** and provide a name for your index, for example, `abc`. The plugin then creates an alias prefixed with `opensearch-ad-plugin-result-` followed by your chosen name, for example, `opensearch-ad-plugin-result-abc`. This alias points to an actual index with a name containing the date and a sequence number, such as `opensearch-ad-plugin-result-abc-history-2024.06.12-000002`, where your results are stored. + + You can use `-` to separate the namespace to manage custom results index permissions. 
For example, if you use `opensearch-ad-plugin-result-financial-us-group1` as the results index, you can create a permission role based on the pattern `opensearch-ad-plugin-result-financial-us-*` to represent the `financial` department at a granular level for the `us` group. {: .note } - When the Security plugin (fine-grained access control) is enabled, the default results index becomes a system index and is no longer accessible through the standard Index or Search APIs. To access its content, you must use the Anomaly Detection RESTful API or the dashboard. As a result, you cannot build customized dashboards using the default results index if the Security plugin is enabled. However, you can create a custom results index in order to build customized dashboards. - If the custom index you specify does not exist, the Anomaly Detection plugin will create it when you create the detector and start your real-time or historical analysis. - If the custom index already exists, the plugin will verify that the index mapping matches the required structure for anomaly results. In this case, ensure that the custom index has a valid mapping as defined in the [`anomaly-results.json`](https://github.com/opensearch-project/anomaly-detection/blob/main/src/main/resources/mappings/anomaly-results.json) file. - - To use the custom results index option, you need the following permissions: - - `indices:admin/create` - The Anomaly Detection plugin requires the ability to create and roll over the custom index. - - `indices:admin/aliases` - The Anomaly Detection plugin requires access to create and manage an alias for the custom index. - - `indices:data/write/index` - You need the `write` permission for the Anomaly Detection plugin to write results into the custom index for a single-entity detector. - - `indices:data/read/search` - You need the `search` permission because the Anomaly Detection plugin needs to search custom results indexes to show results on the Anomaly Detection UI. - - `indices:data/write/delete` - Because the detector might generate a large number of anomaly results, you need the `delete` permission to delete old data and save disk space. - - `indices:data/write/bulk*` - You need the `bulk*` permission because the Anomaly Detection plugin uses the bulk API to write results into the custom index. - - Managing the custom results index: - - The anomaly detection dashboard queries all detectors’ results from all custom results indexes. Having too many custom results indexes might impact the performance of the Anomaly Detection plugin. - - You can use [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/ism/index/) to rollover old results indexes. You can also manually delete or archive any old results indexes. We recommend reusing a custom results index for multiple detectors. - - The Anomaly Detection plugin also provides lifecycle management for custom indexes. It rolls an alias over to a new index when the custom results index meets any of the conditions in the following table. + - To use the custom results index option, you must have the following permissions: + - `indices:admin/create` -- The `create` permission is required in order to create and roll over the custom index. + - `indices:admin/aliases` -- The `aliases` permission is required in order to create and manage an alias for the custom index. + - `indices:data/write/index` -- The `write` permission is required in order to write results into the custom index for a single-entity detector. 
+ - `indices:data/read/search` -- The `search` permission is required in order to search custom results indexes to show results on the Anomaly Detection interface. + - `indices:data/write/delete` -- The detector may generate many anomaly results. The `delete` permission is required in order to delete old data and save disk space. + - `indices:data/write/bulk*` -- The `bulk*` permission is required because the plugin uses the Bulk API to write results into the custom index. + - When managing the custom results index, consider the following: + - The anomaly detection dashboard queries all detector results from all custom results indexes. Having too many custom results indexes can impact the plugin's performance. + - You can use [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/ism/index/) to roll over old results indexes. You can also manually delete or archive any old results indexes. Reusing a custom results index for multiple detectors is recommended. + - The plugin provides lifecycle management for custom indexes. It rolls over an alias to a new index when the custom results index meets any of the conditions in the following table. Parameter | Description | Type | Unit | Example | Required :--- | :--- |:--- |:--- |:--- |:--- @@ -108,43 +113,52 @@ The query is designed to retrieve documents in which the `urlPath.keyword` field `result_index_min_age` | The minimum index age required for rollover, calculated from its creation time to the current time. | `integer` |`day` | `7` | No `result_index_ttl` | The minimum age required to permanently delete rolled-over indexes. | `integer` | `day` | `60` | No -1. Choose **Next**. +8. Choose **Next**. After you define the detector, the next step is to configure the model. ## Step 2: Configure the model -#### Add features to your detector +1. Add features to your detector. -A feature is the field in your index that you want to check for anomalies. A detector can discover anomalies across one or more features. You must choose an aggregation method for each feature: `average()`, `count()`, `sum()`, `min()`, or `max()`. The aggregation method determines what constitutes an anomaly. +A _feature_ is any field in your index that you want to analyze for anomalies. A detector can discover anomalies across one or more features. You must choose an aggregation method for each feature: `average()`, `count()`, `sum()`, `min()`, or `max()`. The aggregation method determines what constitutes an anomaly. For example, if you choose `min()`, the detector focuses on finding anomalies based on the minimum values of your feature. If you choose `average()`, the detector finds anomalies based on the average values of your feature. -A multi-feature model correlates anomalies across all its features. The [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality) makes it less likely for multi-feature models to identify smaller anomalies as compared to a single-feature model. Adding more features might negatively impact the [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall) of a model. A higher proportion of noise in your data might further amplify this negative impact. Selecting the optimal feature set is usually an iterative process. By default, the maximum number of features for a detector is 5. You can adjust this limit with the `plugins.anomaly_detection.max_anomaly_features` setting. -{: .note } +A multi-feature model correlates anomalies across all its features. 
The [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality) makes it less likely that multi-feature models will identify smaller anomalies as compared to a single-feature model. Adding more features can negatively impact the [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall) of a model. A higher proportion of noise in your data can further amplify this negative impact. Selecting the optimal feature set is usually an iterative process. By default, the maximum number of features for a detector is `5`. You can adjust this limit using the `plugins.anomaly_detection.max_anomaly_features` setting. +{: .note} + +### Configuring a model based on an aggregation method To configure an anomaly detection model based on an aggregation method, follow these steps: -1. On the **Configure Model** page, enter the **Feature name** and check **Enable feature**. -1. For **Find anomalies based on**, select **Field Value**. -1. For **aggregation method**, select either **average()**, **count()**, **sum()**, **min()**, or **max()**. -1. For **Field**, select from the available options. +1. On the **Detectors** page, select the desired detector from the list. +2. On the detector's details page, select the **Actions** button to activate the dropdown menu and then select **Edit model configuration**. +3. On the **Edit model configuration** page, select the **Add another feature** button. +4. Enter a name in the **Feature name** field and select the **Enable feature** checkbox. +5. Select **Field value** from the dropdown menu under **Find anomalies based on**. +6. Select the desired aggregation from the dropdown menu under **Aggregation method**. +7. Select the desired field from the options listed in the dropdown menu under **Field**. +8. Select the **Save changes** button. + +### Configuring a model based on a JSON aggregation query To configure an anomaly detection model based on a JSON aggregation query, follow these steps: -1. On the **Configure Model** page, enter the **Feature name** and check **Enable feature**. -1. For **Find anomalies based on**, select **Custom expression**. You will see the JSON editor window open up. -1. Enter your JSON aggregation query in the editor. -For acceptable JSON query syntax, see [OpenSearch Query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/index/) -{: .note } +1. On the **Edit model configuration** page, select the **Add another feature** button. +2. Enter a name in the **Feature name** field and select the **Enable feature** checkbox. +3. Select **Custom expression** from the dropdown menu under **Find anomalies based on**. The JSON editor window will open. +4. Enter your JSON aggregation query in the editor. +5. Select the **Save changes** button. -#### (Optional) Set category fields for high cardinality +For acceptable JSON query syntax, see [OpenSearch Query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/index/). +{: .note} -You can categorize anomalies based on a keyword or IP field type. +### Setting categorical fields for high cardinality -The category field categorizes or slices the source time series with a dimension like IP addresses, product IDs, country codes, and so on. This helps to see a granular view of anomalies within each entity of the category field to isolate and debug issues. +You can categorize anomalies based on a keyword or IP field type. 
You can enable the **Categorical fields** option to categorize, or "slice," the source time series using a dimension, such as an IP address, a product ID, or a country code. This gives you a granular view of anomalies within each entity of the category field to help isolate and debug issues. -To set a category field, choose **Enable a category field** and select a field. You can’t change the category fields after you create the detector. +To set a category field, choose **Enable categorical fields** and select a field. You cannot change the category fields after you create the detector. Only a certain number of unique entities are supported in the category field. Use the following equation to calculate the recommended total number of entities supported in a cluster: @@ -152,7 +166,7 @@ Only a certain number of unique entities are supported in the category field. Us (data nodes * heap size * anomaly detection maximum memory percentage) / (entity model size of a detector) ``` -To get the entity model size of a detector, use the [profile detector API]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/api/#profile-detector). You can adjust the maximum memory percentage with the `plugins.anomaly_detection.model_max_size_percent` setting. +To get the detector's entity model size, use the [Profile Detector API]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/api/#profile-detector). You can adjust the maximum memory percentage using the `plugins.anomaly_detection.model_max_size_percent` setting. Consider a cluster with 3 data nodes, each with 8 GB of JVM heap size and the default 10% memory allocation. With an entity model size of 1 MB, the following formula calculates the estimated number of unique entities: @@ -160,81 +174,109 @@ Consider a cluster with 3 data nodes, each with 8 GB of JVM heap size and the de (8096 MB * 0.1 / 1 MB ) * 3 = 2429 ``` -If the actual total number of unique entities is higher than the number that you calculate (in this case, 2,429), the anomaly detector will attempt to model the extra entities. The detector prioritizes entities that occur more often and are more recent. +If the actual total number of unique entities is higher than the number that you calculate (in this case, 2,429), then the anomaly detector attempts to model the extra entities. The detector prioritizes both entities that occur more often and are more recent. -This formula serves as a starting point. Make sure to test it with a representative workload. You can find more information in the [Improving Anomaly Detection: One million entities in one minute](https://opensearch.org/blog/one-million-enitities-in-one-minute/) blog post. +This formula serves as a starting point. Make sure to test it with a representative workload. See the OpenSearch blog post [Improving Anomaly Detection: One million entities in one minute](https://opensearch.org/blog/one-million-enitities-in-one-minute/) for more information. {: .note } -#### (Advanced settings) Set a shingle size +### Setting a shingle size -Set the number of aggregation intervals from your data stream to consider in a detection window. It’s best to choose this value based on your actual data to see which one leads to the best results for your use case. +In the **Advanced settings** pane, you can set the number of data stream aggregation intervals to include in the detection window. Choose this value based on your actual data to find the optimal setting for your use case. To set the shingle size, select **Show** in the **Advanced settings** pane. 
Enter the desired size in the **intervals** field. -The anomaly detector expects the shingle size to be in the range of 1 and 60. The default shingle size is 8. We recommend that you don't choose 1 unless you have two or more features. Smaller values might increase [recall](https://en.wikipedia.org/wiki/Precision_and_recall) but also false positives. Larger values might be useful for ignoring noise in a signal. +The anomaly detector requires the shingle size to be between 1 and 128. The default is `8`. Use `1` only if you have at least two features. Values less than `8` may increase [recall](https://en.wikipedia.org/wiki/Precision_and_recall) but also may increase false positives. Values greater than `8` may be useful for ignoring noise in a signal. -#### Preview sample anomalies +### Setting an imputation option -Preview sample anomalies and adjust the feature settings if needed. -For sample previews, the Anomaly Detection plugin selects a small number of data samples---for example, one data point every 30 minutes---and uses interpolation to estimate the remaining data points to approximate the actual feature data. It loads this sample dataset into the detector. The detector uses this sample dataset to generate a sample preview of anomaly results. +In the **Advanced settings** pane, you can set the imputation option. This allows you to manage missing data in your streams. The options include the following: -Examine the sample preview and use it to fine-tune your feature configurations (for example, enable or disable features) to get more accurate results. +- **Ignore Missing Data (Default):** The system continues without considering missing data points, keeping the existing data flow. +- **Fill with Custom Values:** Specify a custom value for each feature to replace missing data points, allowing for targeted imputation tailored to your data. +- **Fill with Zeros:** Replace missing values with zeros. This is ideal when the absence of data indicates a significant event, such as a drop to zero in event counts. +- **Use Previous Values:** Fill gaps with the last observed value to maintain continuity in your time-series data. This method treats missing data as non-anomalous, carrying forward the previous trend. -1. Choose **Preview sample anomalies**. - - If you don't see any sample anomaly result, check the detector interval and make sure you have more than 400 data points for some entities during the preview date range. -1. Choose **Next**. +Using these options can improve recall in anomaly detection. For instance, if you are monitoring for drops in event counts, including both partial and complete drops, then filling missing values with zeros helps detect significant data absences, improving detection recall. + +Be cautious when imputing extensively missing data, as excessive gaps can compromise model accuracy. Quality input is critical---poor data quality leads to poor model performance. The confidence score also decreases when imputations occur. You can check whether a feature value has been imputed using the `feature_imputed` field in the anomaly results index. See [Anomaly result mapping]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/result-mapping/) for more information. +{: .note} + +### Suppressing anomalies with threshold-based rules + In the **Advanced settings** pane, you can suppress anomalies by setting rules that define acceptable differences between the expected and actual values, either as an absolute value or a relative percentage.
This helps reduce false anomalies caused by minor fluctuations, allowing you to focus on significant deviations. -## Step 3: Set up detector jobs +Suppose you want to detect substantial changes in log volume while ignoring small variations that are not meaningful. Without customized settings, the system might generate false alerts for minor changes, making it difficult to identify true anomalies. By setting suppression rules, you can ignore minor deviations and focus on real anomalous patterns. -To start a real-time detector to find anomalies in your data in near real-time, check **Start real-time detector automatically (recommended)**. +To suppress anomalies for deviations of less than 30% from the expected value, you can set the following rules: -Alternatively, if you want to perform historical analysis and find patterns in long historical data windows (weeks or months), check **Run historical analysis detection** and select a date range (at least 128 detection intervals). +``` +Ignore anomalies for feature logVolume when the actual value is no more than 30% above the expected value. +Ignore anomalies for feature logVolume when the actual value is no more than 30% below the expected value. +``` + +Ensure that a feature, for example, `logVolume`, is properly defined in your model. Suppression rules are tied to specific features. +{: .note} + +If you expect that the log volume should differ by at least 10,000 from the expected value before being considered an anomaly, you can set absolute thresholds: + +``` +Ignore anomalies for feature logVolume when the actual value is no more than 10000 above the expected value. +Ignore anomalies for feature logVolume when the actual value is no more than 10000 below the expected value. +``` + +If no custom suppression rules are set, then the system defaults to a filter that ignores anomalies with deviations of less than 20% from the expected value for each enabled feature. -Analyzing historical data helps you get familiar with the Anomaly Detection plugin. You can also evaluate the performance of a detector with historical data to further fine-tune it. +### Previewing sample anomalies -We recommend experimenting with historical analysis with different feature sets and checking the precision before moving on to real-time detectors. +You can preview anomalies based on sample feature input and adjust the feature settings as needed. The Anomaly Detection plugin selects a small number of data samples---for example, 1 data point every 30 minutes---and uses interpolation to estimate the remaining data points to approximate the actual feature data. The sample dataset is loaded into the detector, which then uses the sample dataset to generate a preview of the anomalies. -## Step 4: Review and create +1. Choose **Preview sample anomalies**. + - If sample anomaly results are not displayed, check the detector interval to verify that 400 or more data points are set for the entities during the preview date range. +2. Select the **Next** button. + +## Step 3: Setting up detector jobs + +To start a detector to find anomalies in your data in near real time, select **Start real-time detector automatically (recommended)**. + +Alternatively, if you want to perform historical analysis and find patterns in longer historical data windows (weeks or months), select the **Run historical analysis detection** box and select a date range of at least 128 detection intervals. + +Analyzing historical data can help to familiarize you with the Anomaly Detection plugin. 
For example, you can evaluate the performance of a detector against historical data in order to fine-tune it. -Review your detector settings and model configurations to make sure that they're valid and then select **Create detector**. +You can experiment with historical analysis by using different feature sets and checking the precision before using real-time detectors. -![Anomaly detection results]({{site.url}}{{site.baseurl}}/images/review_ad.png) +## Step 4: Reviewing detector settings -If you see any validation errors, edit the settings to fix the errors and then return back to this page. +Review your detector settings and model configurations to confirm that they are valid and then select **Create detector**. + +If a validation error occurs, edit the settings to correct the error and return to the detector page. {: .note } -## Step 5: Observe the results +## Step 5: Observing the results -Choose the **Real-time results** or **Historical analysis** tab. For real-time results, you need to wait for some time to see the anomaly results. If the detector interval is 10 minutes, the detector might take more than an hour to start, because its waiting for sufficient data to generate anomalies. +Choose either the **Real-time results** or **Historical analysis** tab. For real-time results, it will take some time to display the anomaly results. For example, if the detector interval is 10 minutes, then the detector may take an hour to initiate because it is waiting for sufficient data to be able to generate anomalies. -A shorter interval means the model passes the shingle process more quickly and starts to generate the anomaly results sooner. -Use the [profile detector]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/api#profile-detector) operation to make sure you have sufficient data points. +A shorter interval results in the model passing the shingle process more quickly and generating anomaly results sooner. You can use the [profile detector]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/api#profile-detector) operation to ensure that you have enough data points. -If you see the detector pending in "initialization" for longer than a day, aggregate your existing data using the detector interval to check for any missing data points. If you find a lot of missing data points from the aggregated data, consider increasing the detector interval. +If the detector is pending in "initialization" for longer than 1 day, aggregate your existing data and use the detector interval to check for any missing data points. If you find many missing data points, consider increasing the detector interval. -Choose and drag over the anomaly line chart to zoom in and see a more detailed view of an anomaly. +Click and drag over the anomaly line chart to zoom in and see a detailed view of an anomaly. {: .note } -Analyze anomalies with the following visualizations: +You can analyze anomalies using the following visualizations: -- **Live anomalies** (for real-time results) displays live anomaly results for the last 60 intervals. For example, if the interval is 10, it shows results for the last 600 minutes. The chart refreshes every 30 seconds. -- **Anomaly overview** (for real-time results) / **Anomaly history** (for historical analysis in the **Historical analysis** tab) plots the anomaly grade with the corresponding measure of confidence. This pane includes: +- **Live anomalies** (for real-time results) displays live anomaly results for the last 60 intervals. 
For example, if the interval is `10`, it shows results for the last 600 minutes. The chart refreshes every 30 seconds. +- **Anomaly overview** (for real-time results) or **Anomaly history** (for historical analysis on the **Historical analysis** tab) plot the anomaly grade with the corresponding measure of confidence. The pane includes: - The number of anomaly occurrences based on the given data-time range. - - The **Average anomaly grade**, a number between 0 and 1 that indicates how anomalous a data point is. An anomaly grade of 0 represents “not an anomaly,” and a non-zero value represents the relative severity of the anomaly. + - The **Average anomaly grade**, a number between 0 and 1 that indicates how anomalous a data point is. An anomaly grade of `0` represents "not an anomaly," and a non-zero value represents the relative severity of the anomaly. - **Confidence** estimate of the probability that the reported anomaly grade matches the expected anomaly grade. Confidence increases as the model observes more data and learns the data behavior and trends. Note that confidence is distinct from model accuracy. - **Last anomaly occurrence** is the time at which the last anomaly occurred. -Underneath **Anomaly overview**/**Anomaly history** are: +Underneath **Anomaly overview** or **Anomaly history** are: - **Feature breakdown** plots the features based on the aggregation method. You can vary the date-time range of the detector. Selecting a point on the feature line chart shows the **Feature output**, the number of times a field appears in your index, and the **Expected value**, a predicted value for the feature output. Where there is no anomaly, the output and expected values are equal. - ![Anomaly detection results]({{site.url}}{{site.baseurl}}/images/feature-contribution-ad.png) - - **Anomaly occurrences** shows the `Start time`, `End time`, `Data confidence`, and `Anomaly grade` for each detected anomaly. Selecting a point on the anomaly line chart shows **Feature Contribution**, the percentage of a feature that contributes to the anomaly -![Anomaly detection results]({{site.url}}{{site.baseurl}}/images/feature-contribution-ad.png) - - If you set the category field, you see an additional **Heat map** chart. The heat map correlates results for anomalous entities. This chart is empty until you select an anomalous entity. You also see the anomaly and feature line chart for the time period of the anomaly (`anomaly_grade` > 0). @@ -254,7 +296,7 @@ To see all the configuration settings for a detector, choose the **Detector conf 1. To make any changes to the detector configuration, or fine tune the time interval to minimize any false positives, go to the **Detector configuration** section and choose **Edit**. - You need to stop real-time and historical analysis to change its configuration. Confirm that you want to stop the detector and proceed. -1. To enable or disable features, in the **Features** section, choose **Edit** and adjust the feature settings as needed. After you make your changes, choose **Save and start detector**. +2. To enable or disable features, in the **Features** section, choose **Edit** and adjust the feature settings as needed. After you make your changes, choose **Save and start detector**. 
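In addition to the visualizations described above, anomaly results can be retrieved programmatically through the plugin's Search Detector Result API. The following is a minimal sketch; the detector ID shown is the sample ID used elsewhere in this documentation and should be replaced with your own, and the filter on `anomaly_grade` simply limits the response to detected anomalies:

```json
GET _plugins/_anomaly_detection/detectors/results/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "detector_id": "kzcZ43wBgEQAbjDnhzGF" } },
        { "range": { "anomaly_grade": { "gt": 0 } } }
      ]
    }
  }
}
```
{% include copy-curl.html %}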
## Step 8: Manage your detectors diff --git a/_observing-your-data/ad/result-mapping.md b/_observing-your-data/ad/result-mapping.md index 7e1482a013..967b185684 100644 --- a/_observing-your-data/ad/result-mapping.md +++ b/_observing-your-data/ad/result-mapping.md @@ -9,9 +9,7 @@ redirect_from: # Anomaly result mapping -If you enabled custom result index, the anomaly detection plugin stores the results in your own index. - -If the anomaly detector doesn’t detect an anomaly, the result has the following format: +When you select the **Enable custom result index** box on the **Custom result index** pane, the Anomaly Detection plugin will save the results to an index of your choosing. When the anomaly detector does not detect an anomaly, the result format is as follows: ```json { @@ -61,6 +59,7 @@ If the anomaly detector doesn’t detect an anomaly, the result has the followin "threshold": 1.2368549346675202 } ``` +{% include copy-curl.html %} ## Response body fields @@ -80,7 +79,83 @@ Field | Description `model_id` | A unique ID that identifies a model. If a detector is a single-stream detector (with no category field), it has only one model. If a detector is a high-cardinality detector (with one or more category fields), it might have multiple models, one for each entity. `threshold` | One of the criteria for a detector to classify a data point as an anomaly is that its `anomaly_score` must surpass a dynamic threshold. This field records the current threshold. -If an anomaly detector detects an anomaly, the result has the following format: +When the imputation option is enabled, the anomaly results include a `feature_imputed` array showing which features were modified due to missing data. If no features were imputed, then this is excluded. + +In the following example anomaly result output, the `processing_bytes_max` feature was imputed, as shown by the `imputed: true` status: + +```json +{ + "detector_id": "kzcZ43wBgEQAbjDnhzGF", + "schema_version": 5, + "data_start_time": 1635898161367, + "data_end_time": 1635898221367, + "feature_data": [ + { + "feature_id": "processing_bytes_max", + "feature_name": "processing bytes max", + "data": 2322 + }, + { + "feature_id": "processing_bytes_avg", + "feature_name": "processing bytes avg", + "data": 1718.6666666666667 + }, + { + "feature_id": "processing_bytes_min", + "feature_name": "processing bytes min", + "data": 1375 + }, + { + "feature_id": "processing_bytes_sum", + "feature_name": "processing bytes sum", + "data": 5156 + }, + { + "feature_id": "processing_time_max", + "feature_name": "processing time max", + "data": 31198 + } + ], + "execution_start_time": 1635898231577, + "execution_end_time": 1635898231622, + "anomaly_score": 1.8124904404395776, + "anomaly_grade": 0, + "confidence": 0.9802940756605277, + "entity": [ + { + "name": "process_name", + "value": "process_3" + } + ], + "model_id": "kzcZ43wBgEQAbjDnhzGF_entity_process_3", + "threshold": 1.2368549346675202, + "feature_imputed": [ + { + "feature_id": "processing_bytes_max", + "imputed": true + }, + { + "feature_id": "processing_bytes_avg", + "imputed": false + }, + { + "feature_id": "processing_bytes_min", + "imputed": false + }, + { + "feature_id": "processing_bytes_sum", + "imputed": false + }, + { + "feature_id": "processing_time_max", + "imputed": false + } + ] +} +``` +{% include copy-curl.html %} + +When an anomaly is detected, the result is provided in the following format: ```json { @@ -179,24 +254,23 @@ If an anomaly detector detects an anomaly, the result has the following 
format: "execution_start_time": 1635898427803 } ``` +{% include copy-curl.html %} -You can see the following additional fields: +Note that the result includes the following additional field. Field | Description :--- | :--- `relevant_attribution` | Represents the contribution of each input variable. The sum of the attributions is normalized to 1. `expected_values` | The expected value for each feature. -At times, the detector might detect an anomaly late. -Let's say the detector sees a random mix of the triples {1, 2, 3} and {2, 4, 5} that correspond to `slow weeks` and `busy weeks`, respectively. For example 1, 2, 3, 1, 2, 3, 2, 4, 5, 1, 2, 3, 2, 4, 5, ... and so on. -If the detector comes across a pattern {2, 2, X} and it's yet to see X, the detector infers that the pattern is anomalous, but it can't determine at this point which of the 2's is the cause. If X = 3, then the detector knows it's the first 2 in that unfinished triple, and if X = 5, then it's the second 2. If it's the first 2, then the detector detects the anomaly late. +The detector may be late in detecting an anomaly. For example: The detector observes a sequence of data that alternates between "slow weeks" (represented by the triples {1, 2, 3}) and "busy weeks" (represented by the triples {2, 4, 5}). If the detector comes across a pattern {2, 2, X}, where it has not yet seen the value that X will take, then the detector infers that the pattern is anomalous. However, it cannot determine which 2 is the cause. If X = 3, then the first 2 is the anomaly. If X = 5, then the second 2 is the anomaly. If it is the first 2, then the detector will be late in detecting the anomaly. -If a detector detects an anomaly late, the result has the following additional fields: +When a detector is late in detecting an anomaly, the result includes the following additional fields. Field | Description :--- | :--- -`past_values` | The actual input that triggered an anomaly. If `past_values` is null, the attributions or expected values are from the current input. If `past_values` is not null, the attributions or expected values are from a past input (for example, the previous two steps of the data [1,2,3]). -`approx_anomaly_start_time` | The approximate time of the actual input that triggers an anomaly. This field helps you understand when a detector flags an anomaly. Both single-stream and high-cardinality detectors don't query previous anomaly results because these queries are expensive operations. The cost is especially high for high-cardinality detectors that might have a lot of entities. If the data is not continuous, the accuracy of this field is low and the actual time that the detector detects an anomaly can be earlier. +`past_values` | The actual input that triggered an anomaly. If `past_values` is `null`, then the attributions or expected values are from the current input. If `past_values` is not `null`, then the attributions or expected values are from a past input (for example, the previous two steps of the data [1,2,3]). +`approx_anomaly_start_time` | The approximate time of the actual input that triggered an anomaly. This field helps you understand the time at which a detector flags an anomaly. Both single-stream and high-cardinality detectors do not query previous anomaly results because these queries are costly operations. The cost is especially high for high-cardinality detectors that may have many entities. 
If the data is not continuous, then the accuracy of this field is low and the actual time at which the detector detects an anomaly can be earlier. ```json { @@ -319,3 +393,4 @@ Field | Description "approx_anomaly_start_time": 1635883620000 } ``` +{% include copy-curl.html %} From 967f257f0bc0d6ac3500f740555726c9bdb5f780 Mon Sep 17 00:00:00 2001 From: John Mazanec Date: Mon, 16 Sep 2024 09:58:34 -0400 Subject: [PATCH 062/111] Add documentation changes for disk-based k-NN (#8246) * Add space type as top level Signed-off-by: John Mazanec * Add new rescore parameter Signed-off-by: John Mazanec * Add new rescore parameter Signed-off-by: John Mazanec * add docs for compression and mode Signed-off-by: John Mazanec * Clean up compression docs Signed-off-by: John Mazanec * Doc review Signed-off-by: Fanit Kolchina * Update a few things Signed-off-by: John Mazanec * Doc review Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: John Mazanec Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../styles/Vocab/OpenSearch/Words/accept.txt | 1 + .../supported-field-types/knn-vector.md | 123 ++++++++++++++---- _query-dsl/specialized/neural.md | 2 + _search-plugins/knn/api.md | 7 +- _search-plugins/knn/approximate-knn.md | 72 +++++++++- _search-plugins/knn/knn-index.md | 11 +- .../knn/knn-vector-quantization.md | 8 +- _search-plugins/knn/nested-search-knn.md | 4 +- _search-plugins/knn/performance-tuning.md | 4 +- _search-plugins/knn/radial-search-knn.md | 2 +- _search-plugins/vector-search.md | 4 +- 11 files changed, 190 insertions(+), 48 deletions(-) diff --git a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt index d0d1c308eb..ff634490fc 100644 --- a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt +++ b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt @@ -105,6 +105,7 @@ p\d{2} [Rr]eprovision(ed|ing)? [Rr]erank(er|ed|ing)? [Rr]epo +[Rr]escor(e|ed|ing)? [Rr]ewriter [Rr]ollout [Rr]ollup diff --git a/_field-types/supported-field-types/knn-vector.md b/_field-types/supported-field-types/knn-vector.md index f0dc831268..4c00b94de8 100644 --- a/_field-types/supported-field-types/knn-vector.md +++ b/_field-types/supported-field-types/knn-vector.md @@ -22,8 +22,7 @@ PUT test-index { "settings": { "index": { - "knn": true, - "knn.algo_param.ef_search": 100 + "knn": true } }, "mappings": { @@ -31,14 +30,10 @@ PUT test-index "my_vector": { "type": "knn_vector", "dimension": 3, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", - "engine": "lucene", - "parameters": { - "ef_construction": 128, - "m": 24 - } + "engine": "faiss" } } } @@ -47,6 +42,92 @@ PUT test-index ``` {% include copy-curl.html %} +## Vector workload modes + +Vector search involves trade-offs between low-latency and low-cost search. Specify the `mode` mapping parameter of the `knn_vector` type to indicate which search mode you want to prioritize. The `mode` dictates the default values for k-NN parameters. You can further fine-tune your index by overriding the default parameter values in the k-NN field mapping. + +The following modes are currently supported. 
+ +| Mode | Default engine | Description | +|:---|:---|:---| +| `in_memory` (Default) | `nmslib` | Prioritizes low-latency search. This mode uses the `nmslib` engine without any quantization applied. It is configured with the default parameter values for vector search in OpenSearch. | +| `on_disk` | `faiss` | Prioritizes low-cost vector search while maintaining strong recall. By default, the `on_disk` mode uses quantization and rescoring to execute a two-pass approach to retrieve the top neighbors. The `on_disk` mode supports only `float` vector types. | + +To create a k-NN index that uses the `on_disk` mode for low-cost search, send the following request: + +```json +PUT test-index +{ + "settings": { + "index": { + "knn": true + } + }, + "mappings": { + "properties": { + "my_vector": { + "type": "knn_vector", + "dimension": 3, + "space_type": "l2", + "mode": "on_disk" + } + } + } +} +``` +{% include copy-curl.html %} + +## Compression levels + +The `compression_level` mapping parameter selects a quantization encoder that reduces vector memory consumption by the given factor. The following table lists the available `compression_level` values. + +| Compression level | Supported engines | +|:------------------|:-------------------------------| +| `1x` | `faiss`, `lucene`, and `nmslib` | +| `2x` | `faiss` | +| `4x` | `lucene` | +| `8x` | `faiss` | +| `16x` | `faiss` | +| `32x` | `faiss` | + +For example, if a `compression_level` of `32x` is passed for a `float32` index of 768-dimensional vectors, the per-vector memory is reduced from `4 * 768 = 3072` bytes to `3072 / 32 = 96` bytes. Internally, binary quantization (which maps a `float` to a `bit`) may be used to achieve this compression. + +If you set the `compression_level` parameter, then you cannot specify an `encoder` in the `method` mapping. Compression levels greater than `1x` are only supported for `float` vector types. +{: .note} + +The following table lists the default `compression_level` values for the available workload modes. + +| Mode | Default compression level | +|:------------------|:-------------------------------| +| `in_memory` | `1x` | +| `on_disk` | `32x` | + + +To create a vector field with a `compression_level` of `16x`, specify the `compression_level` parameter in the mappings. This parameter overrides the default compression level for the `on_disk` mode from `32x` to `16x`, producing higher recall and accuracy at the expense of a larger memory footprint: + +```json +PUT test-index +{ + "settings": { + "index": { + "knn": true + } + }, + "mappings": { + "properties": { + "my_vector": { + "type": "knn_vector", + "dimension": 3, + "space_type": "l2", + "mode": "on_disk", + "compression_level": "16x" + } + } + } +} +``` +{% include copy-curl.html %} + +## Method definitions + +[Method definitions]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index#method-definitions) are used when the underlying [approximate k-NN]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/) algorithm does not require training. For example, the following `knn_vector` field specifies that *nmslib*'s implementation of *hnsw* should be used for approximate k-NN search. During indexing, *nmslib* will build the corresponding *hnsw* segment files.
@@ -55,13 +136,13 @@ PUT test-index "my_vector": { "type": "knn_vector", "dimension": 4, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "nmslib", "parameters": { - "ef_construction": 128, - "m": 24 + "ef_construction": 100, + "m": 16 } } } @@ -73,6 +154,7 @@ Model IDs are used when the underlying Approximate k-NN algorithm requires a tra model contains the information needed to initialize the native library segment files. ```json +"my_vector": { "type": "knn_vector", "model_id": "my-model" } @@ -80,6 +162,7 @@ model contains the information needed to initialize the native library segment f However, if you intend to use Painless scripting or a k-NN score script, you only need to pass the dimension. ```json +"my_vector": { "type": "knn_vector", "dimension": 128 } @@ -123,13 +206,13 @@ PUT test-index "type": "knn_vector", "dimension": 3, "data_type": "byte", + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "lucene", "parameters": { - "ef_construction": 128, - "m": 24 + "ef_construction": 100, + "m": 16 } } } @@ -465,14 +548,10 @@ PUT /test-binary-hnsw "type": "knn_vector", "dimension": 8, "data_type": "binary", + "space_type": "hamming", "method": { "name": "hnsw", - "space_type": "hamming", - "engine": "faiss", - "parameters": { - "ef_construction": 128, - "m": 24 - } + "engine": "faiss" } } } @@ -695,12 +774,12 @@ POST _plugins/_knn/models/test-binary-model/_train "dimension": 8, "description": "model with binary data", "data_type": "binary", + "space_type": "hamming", "method": { "name": "ivf", "engine": "faiss", - "space_type": "hamming", "parameters": { - "nlist": 1, + "nlist": 16, "nprobes": 1 } } diff --git a/_query-dsl/specialized/neural.md b/_query-dsl/specialized/neural.md index 14b930cdb6..6cd534b87f 100644 --- a/_query-dsl/specialized/neural.md +++ b/_query-dsl/specialized/neural.md @@ -35,6 +35,8 @@ Field | Data type | Required/Optional | Description `min_score` | Float | Optional | The minimum score threshold for the search results. Only one variable, either `k`, `min_score`, or `max_distance`, can be specified. For more information, see [k-NN radial search]({{site.url}}{{site.baseurl}}/search-plugins/knn/radial-search-knn/). `max_distance` | Float | Optional | The maximum distance threshold for the search results. Only one variable, either `k`, `min_score`, or `max_distance`, can be specified. For more information, see [k-NN radial search]({{site.url}}{{site.baseurl}}/search-plugins/knn/radial-search-knn/). `filter` | Object | Optional | A query that can be used to reduce the number of documents considered. For more information about filter usage, see [k-NN search with filters]({{site.url}}{{site.baseurl}}/search-plugins/knn/filter-search-knn/). **Important**: Filter can only be used with the `faiss` or `lucene` engines. +`method_parameters` | Object | Optional | Parameters passed to the k-NN index during search. See [Additional query parameters]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#additional-query-parameters). +`rescore` | Object | Optional | Parameters for configuring rescoring functionality for k-NN indexes built using quantization. See [Rescoring]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#rescoring-quantized-results-using-full-precision). 
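For illustration, a `neural` query that sets both of these new parameters might look like the following sketch. The index name, vector field, model ID, and parameter values are hypothetical placeholders, and `ef_search` assumes an HNSW-based index; adjust them for your own setup:

```json
GET /my-nlp-index/_search
{
  "query": {
    "neural": {
      "passage_embedding": {
        "query_text": "wild west",
        "model_id": "aVeif4oB5Vm0Tdw8zYO2",
        "k": 10,
        "method_parameters": {
          "ef_search": 100
        },
        "rescore": {
          "oversample_factor": 2.0
        }
      }
    }
  }
}
```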
#### Example request diff --git a/_search-plugins/knn/api.md b/_search-plugins/knn/api.md index c7314f7ae2..1a6c970640 100644 --- a/_search-plugins/knn/api.md +++ b/_search-plugins/knn/api.md @@ -234,7 +234,7 @@ Response field | Description `timestamp` | The date and time when the model was created. `description` | A user-provided description of the model. `error` | An error message explaining why the model is in a failed state. -`space_type` | The space type for which this model is trained, for example, Euclidean or cosine. +`space_type` | The space type for which this model is trained, for example, Euclidean or cosine. Note - this value can be set in the top-level of the request as well `dimension` | The dimensionality of the vector space for which this model is designed. `engine` | The native library used to create the model, either `faiss` or `nmslib`. @@ -351,6 +351,7 @@ Request parameter | Description `search_size` | The training data is pulled from the training index using scroll queries. This parameter defines the number of results to return per scroll query. Default is `10000`. Optional. `description` | A user-provided description of the model. Optional. `method` | The configuration of the approximate k-NN method used for search operations. For more information about the available methods, see [k-NN index method definitions]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index#method-definitions). The method requires training to be valid. +`space_type` | The space type for which this model is trained, for example, Euclidean or cosine. Note: This value can also be set in the `method` parameter. #### Usage @@ -365,10 +366,10 @@ POST /_plugins/_knn/models/{model_id}/_train?preference={node_id} "max_training_vector_count": 1200, "search_size": 100, "description": "My model", + "space_type": "l2", "method": { "name":"ivf", "engine":"faiss", - "space_type": "l2", "parameters":{ "nlist":128, "encoder":{ @@ -395,10 +396,10 @@ POST /_plugins/_knn/models/_train?preference={node_id} "max_training_vector_count": 1200, "search_size": 100, "description": "My model", + "space_type": "l2", "method": { "name":"ivf", "engine":"faiss", - "space_type": "l2", "parameters":{ "nlist":128, "encoder":{ diff --git a/_search-plugins/knn/approximate-knn.md b/_search-plugins/knn/approximate-knn.md index a73844513e..f8921033e0 100644 --- a/_search-plugins/knn/approximate-knn.md +++ b/_search-plugins/knn/approximate-knn.md @@ -49,9 +49,9 @@ PUT my-knn-index-1 "my_vector1": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "nmslib", "parameters": { "ef_construction": 128, @@ -62,9 +62,9 @@ PUT my-knn-index-1 "my_vector2": { "type": "knn_vector", "dimension": 4, + "space_type": "innerproduct", "method": { "name": "hnsw", - "space_type": "innerproduct", "engine": "faiss", "parameters": { "ef_construction": 256, @@ -199,10 +199,10 @@ POST /_plugins/_knn/models/my-model/_train "training_field": "train-field", "dimension": 4, "description": "My model description", + "space_type": "l2", "method": { "name": "ivf", "engine": "faiss", - "space_type": "l2", "parameters": { "nlist": 4, "nprobes": 2 @@ -308,6 +308,72 @@ Engine | Notes :--- | :--- `faiss` | If `nprobes` is present in a query, it overrides the value provided when creating the index. +### Rescoring quantized results using full precision + +Quantization can be used to significantly reduce the memory footprint of a k-NN index. 
For more information about quantization, see [k-NN vector quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization). Because some vector representation is lost during quantization, the computed distances will be approximate. This causes the overall recall of the search to decrease. + +To improve recall while maintaining the memory savings of quantization, you can use a two-phase search approach. In the first phase, `oversample_factor * k` results are retrieved from an index using quantized vectors and the scores are approximated. In the second phase, the full-precision vectors of those `oversample_factor * k` results are loaded into memory from disk, and scores are recomputed against the full-precision query vector. The results are then reduced to the top k. + +The default rescoring behavior is determined by the `mode` and `compression_level` of the backing k-NN vector field: + +- For `in_memory` mode, no rescoring is applied by default. +- For `on_disk` mode, default rescoring is based on the configured `compression_level`. Each `compression_level` provides a default `oversample_factor`, specified in the following table. + +| Compression level | Default rescore `oversample_factor` | +|:------------------|:----------------------------------| +| `32x` (default) | 3.0 | +| `16x` | 2.0 | +| `8x` | 2.0 | +| `4x` | No default rescoring | +| `2x` | No default rescoring | + +To explicitly apply rescoring, provide the `rescore` parameter in a query on a quantized index and specify the `oversample_factor`: + +```json +GET my-knn-index-1/_search +{ + "size": 2, + "query": { + "knn": { + "target-field": { + "vector": [2, 3, 5, 6], + "k": 2, + "rescore" : { + "oversample_factor": 1.2 + } + } + } + } +} +``` +{% include copy-curl.html %} + +Alternatively, set the `rescore` parameter to `true` to use a default `oversample_factor` of `1.0`: + +```json +GET my-knn-index-1/_search +{ + "size": 2, + "query": { + "knn": { + "target-field": { + "vector": [2, 3, 5, 6], + "k": 2, + "rescore" : true + } + } + } +} +``` +{% include copy-curl.html %} + +The `oversample_factor` is a floating-point number between 1.0 and 100.0, inclusive. The number of results in the first pass is calculated as `oversample_factor * k` and is guaranteed to be between 100 and 10,000, inclusive. If the calculated number of results is smaller than 100, then the number of results is set to 100. If the calculated number of results is greater than 10,000, then the number of results is set to 10,000. + +Rescoring is only supported for the `faiss` engine. + +Rescoring is not needed if quantization is not used because the scores returned are already fully precise. +{: .note} + ### Using approximate k-NN with filters To learn about using filters with k-NN search, see [k-NN search with filters]({{site.url}}{{site.baseurl}}/search-plugins/knn/filter-search-knn/). 
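As a minimal sketch of that combination (reusing `my-knn-index-1` and the `faiss`-backed `my_vector2` field from the examples above; the `term` filter field and value are hypothetical), a filter can be placed directly inside the `knn` clause when the index uses the `faiss` or `lucene` engine:

```json
GET my-knn-index-1/_search
{
  "size": 2,
  "query": {
    "knn": {
      "my_vector2": {
        "vector": [2, 3, 5, 6],
        "k": 2,
        "filter": {
          "term": {
            "color": "blue"
          }
        }
      }
    }
  }
}
```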
diff --git a/_search-plugins/knn/knn-index.md b/_search-plugins/knn/knn-index.md index 5bb7257898..15d660ca00 100644 --- a/_search-plugins/knn/knn-index.md +++ b/_search-plugins/knn/knn-index.md @@ -25,9 +25,9 @@ PUT /test-index "my_vector1": { "type": "knn_vector", "dimension": 3, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "lucene", "parameters": { "ef_construction": 128, @@ -83,7 +83,7 @@ A method definition will always contain the name of the method, the space_type t Mapping parameter | Required | Default | Updatable | Description :--- | :--- | :--- | :--- | :--- `name` | true | n/a | false | The identifier for the nearest neighbor method. -`space_type` | false | l2 | false | The vector space used to calculate the distance between vectors. +`space_type` | false | l2 | false | The vector space used to calculate the distance between vectors. Note: This value can also be specified at the top level of the mapping. `engine` | false | nmslib | false | The approximate k-NN library to use for indexing and search. The available libraries are faiss, nmslib, and Lucene. `parameters` | false | null | false | The parameters used for the nearest neighbor method. @@ -168,7 +168,6 @@ An index created in OpenSearch version 2.11 or earlier will still use the old `e "method": { "name":"hnsw", "engine":"lucene", - "space_type": "l2", "parameters":{ "m":2048, "ef_construction": 245 @@ -186,7 +185,6 @@ The following example method definition specifies the `hnsw` method and a `pq` e "method": { "name":"hnsw", "engine":"faiss", - "space_type": "l2", "parameters":{ "encoder":{ "name":"pq", @@ -232,7 +230,6 @@ The following example uses the `ivf` method without specifying an encoder (by d "method": { "name":"ivf", "engine":"faiss", - "space_type": "l2", "parameters":{ "nlist": 4, "nprobes": 2 @@ -246,7 +243,6 @@ The following example uses the `ivf` method with a `pq` encoder: "method": { "name":"ivf", "engine":"faiss", - "space_type": "l2", "parameters":{ "encoder":{ "name":"pq", @@ -265,7 +261,6 @@ The following example uses the `hnsw` method without specifying an encoder (by d "method": { "name":"hnsw", "engine":"faiss", - "space_type": "l2", "parameters":{ "ef_construction": 256, "m": 8 @@ -279,7 +274,6 @@ The following example uses the `hnsw` method with an `sq` encoder of type `fp16` "method": { "name":"hnsw", "engine":"faiss", - "space_type": "l2", "parameters":{ "encoder": { "name": "sq", @@ -300,7 +294,6 @@ The following example uses the `ivf` method with an `sq` encoder of type `fp16`: "method": { "name":"ivf", "engine":"faiss", - "space_type": "l2", "parameters":{ "encoder": { "name": "sq", diff --git a/_search-plugins/knn/knn-vector-quantization.md b/_search-plugins/knn/knn-vector-quantization.md index 5675d57eab..fbdcb4ad2e 100644 --- a/_search-plugins/knn/knn-vector-quantization.md +++ b/_search-plugins/knn/knn-vector-quantization.md @@ -40,10 +40,10 @@ PUT /test-index "my_vector1": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", "engine": "lucene", - "space_type": "l2", "parameters": { "encoder": { "name": "sq" @@ -85,10 +85,10 @@ PUT /test-index "my_vector1": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", "engine": "lucene", - "space_type": "l2", "parameters": { "encoder": { "name": "sq", @@ -150,10 +150,10 @@ PUT /test-index "my_vector1": { "type": "knn_vector", "dimension": 3, + "space_type": "l2", "method": { "name": "hnsw", "engine": "faiss", - "space_type": "l2", 
"parameters": { "encoder": { "name": "sq" @@ -194,10 +194,10 @@ PUT /test-index "my_vector1": { "type": "knn_vector", "dimension": 3, + "space_type": "l2", "method": { "name": "hnsw", "engine": "faiss", - "space_type": "l2", "parameters": { "encoder": { "name": "sq", diff --git a/_search-plugins/knn/nested-search-knn.md b/_search-plugins/knn/nested-search-knn.md index d947ebc6e6..bbba6c9c1e 100644 --- a/_search-plugins/knn/nested-search-knn.md +++ b/_search-plugins/knn/nested-search-knn.md @@ -38,9 +38,9 @@ PUT my-knn-index-1 "my_vector": { "type": "knn_vector", "dimension": 3, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "lucene", "parameters": { "ef_construction": 100, @@ -324,9 +324,9 @@ PUT my-knn-index-1 "my_vector": { "type": "knn_vector", "dimension": 3, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "lucene", "parameters": { "ef_construction": 100, diff --git a/_search-plugins/knn/performance-tuning.md b/_search-plugins/knn/performance-tuning.md index 123b1daef1..77f44dee93 100644 --- a/_search-plugins/knn/performance-tuning.md +++ b/_search-plugins/knn/performance-tuning.md @@ -59,9 +59,9 @@ The `_source` field contains the original JSON document body that was passed at "location": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "faiss" } } @@ -85,9 +85,9 @@ In OpenSearch 2.15 or later, you can further improve indexing speed and reduce d "location": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "faiss" } } diff --git a/_search-plugins/knn/radial-search-knn.md b/_search-plugins/knn/radial-search-knn.md index 1a4a223294..e5449a0993 100644 --- a/_search-plugins/knn/radial-search-knn.md +++ b/_search-plugins/knn/radial-search-knn.md @@ -53,9 +53,9 @@ PUT knn-index-test "my_vector": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "faiss", "parameters": { "ef_construction": 100, diff --git a/_search-plugins/vector-search.md b/_search-plugins/vector-search.md index cc298786a3..f19030bf90 100644 --- a/_search-plugins/vector-search.md +++ b/_search-plugins/vector-search.md @@ -37,9 +37,9 @@ PUT test-index "my_vector1": { "type": "knn_vector", "dimension": 1024, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "nmslib", "parameters": { "ef_construction": 128, @@ -131,9 +131,9 @@ PUT /hotels-index "location": { "type": "knn_vector", "dimension": 2, + "space_type": "l2", "method": { "name": "hnsw", - "space_type": "l2", "engine": "lucene", "parameters": { "ef_construction": 100, From 3abba50414c8beb81789f776a14c8192d18200d1 Mon Sep 17 00:00:00 2001 From: Xun Zhang Date: Mon, 16 Sep 2024 10:47:35 -0700 Subject: [PATCH 063/111] add offline batch ingestion tech doc (#8251) * add offline batch ingestion tech doc Signed-off-by: Xun Zhang * Doc review Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Xun Zhang Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../styles/Vocab/OpenSearch/Words/accept.txt | 3 +- 
_ml-commons-plugin/api/async-batch-ingest.md | 97 +++++++++ _ml-commons-plugin/api/execute-algorithm.md | 2 +- .../remote-models/async-batch-ingestion.md | 190 ++++++++++++++++++ 4 files changed, 290 insertions(+), 2 deletions(-) create mode 100644 _ml-commons-plugin/api/async-batch-ingest.md create mode 100644 _ml-commons-plugin/remote-models/async-batch-ingestion.md diff --git a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt index ff634490fc..c6d129c2c5 100644 --- a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt +++ b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt @@ -77,8 +77,9 @@ Levenshtein [Mm]ultivalued [Mm]ultiword [Nn]amespace -[Oo]versamples? +[Oo]ffline [Oo]nboarding +[Oo]versamples? pebibyte p\d{2} [Pp]erformant diff --git a/_ml-commons-plugin/api/async-batch-ingest.md b/_ml-commons-plugin/api/async-batch-ingest.md new file mode 100644 index 0000000000..ace95ba4d4 --- /dev/null +++ b/_ml-commons-plugin/api/async-batch-ingest.md @@ -0,0 +1,97 @@ +--- +layout: default +title: Asynchronous batch ingestion +parent: ML Commons APIs +has_children: false +has_toc: false +nav_order: 35 +--- + +# Asynchronous batch ingestion +**Introduced 2.17** +{: .label .label-purple } + +Use the Asynchronous Batch Ingestion API to ingest data into your OpenSearch cluster from your files on remote file servers, such as Amazon Simple Storage Service (Amazon S3) or OpenAI. For detailed configuration steps, see [Asynchronous batch ingestion]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/async-batch-ingestion/). + +## Path and HTTP methods + +```json +POST /_plugins/_ml/_batch_ingestion +``` + +#### Request fields + +The following table lists the available request fields. + +Field | Data type | Required/Optional | Description +:--- | :--- | :--- +`index_name`| String | Required | The index name. +`field_map` | Object | Required | Maps fields from the source file to specific fields in an OpenSearch index for ingestion. +`ingest_fields` | Array | Optional | Lists fields from the source file that should be ingested directly into the OpenSearch index without any additional mapping. +`credential` | Object | Required | Contains the authentication information for accessing external data sources, such as Amazon S3 or OpenAI. +`data_source` | Object | Required | Specifies the type and location of the external file(s) from which the data is ingested. +`data_source.type` | String | Required | Specifies the type of the external data source. Valid values are `s3` and `openAI`. +`data_source.source` | Array | Required | Specifies one or more file locations from which the data is ingested. For `s3`, specify the file path to the Amazon S3 bucket (for example, `["s3://offlinebatch/output/sagemaker_batch.json.out"]`). For `openAI`, specify the file IDs for input or output files (for example, `["file-", "file-", "file-"]`). 
+ +## Example request: Ingesting a single file + +```json +POST /_plugins/_ml/_batch_ingestion +{ + "index_name": "my-nlp-index", + "field_map": { + "chapter": "$.content[0]", + "title": "$.content[1]", + "chapter_embedding": "$.SageMakerOutput[0]", + "title_embedding": "$.SageMakerOutput[1]", + "_id": "$.id" + }, + "ingest_fields": ["$.id"], + "credential": { + "region": "us-east-1", + "access_key": "", + "secret_key": "", + "session_token": "" + }, + "data_source": { + "type": "s3", + "source": ["s3://offlinebatch/output/sagemaker_batch.json.out"] + } +} +``` +{% include copy-curl.html %} + +## Example request: Ingesting multiple files + +```json +POST /_plugins/_ml/_batch_ingestion +{ + "index_name": "my-nlp-index-openai", + "field_map": { + "question": "source[1].$.body.input[0]", + "answer": "source[1].$.body.input[1]", + "question_embedding":"source[0].$.response.body.data[0].embedding", + "answer_embedding":"source[0].$.response.body.data[1].embedding", + "_id": ["source[0].$.custom_id", "source[1].$.custom_id"] + }, + "ingest_fields": ["source[2].$.custom_field1", "source[2].$.custom_field2"], + "credential": { + "openAI_key": "" + }, + "data_source": { + "type": "openAI", + "source": ["file-", "file-", "file-"] + } +} +``` +{% include copy-curl.html %} + +## Example response + +```json +{ + "task_id": "cbsPlpEBMHcagzGbOQOx", + "task_type": "BATCH_INGEST", + "status": "CREATED" +} +``` diff --git a/_ml-commons-plugin/api/execute-algorithm.md b/_ml-commons-plugin/api/execute-algorithm.md index 7b06cfefe8..6acd926444 100644 --- a/_ml-commons-plugin/api/execute-algorithm.md +++ b/_ml-commons-plugin/api/execute-algorithm.md @@ -2,7 +2,7 @@ layout: default title: Execute algorithm parent: ML Commons APIs -nav_order: 30 +nav_order: 37 --- # Execute algorithm diff --git a/_ml-commons-plugin/remote-models/async-batch-ingestion.md b/_ml-commons-plugin/remote-models/async-batch-ingestion.md new file mode 100644 index 0000000000..a09c028477 --- /dev/null +++ b/_ml-commons-plugin/remote-models/async-batch-ingestion.md @@ -0,0 +1,190 @@ +--- +layout: default +title: Asynchronous batch ingestion +nav_order: 90 +parent: Connecting to externally hosted models +grand_parent: Integrating ML models +--- + + +# Asynchronous batch ingestion +**Introduced 2.17** +{: .label .label-purple } + +[Batch ingestion]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/batch-ingestion/) configures an ingest pipeline, which processes documents one by one. For each document, batch ingestion calls an externally hosted model to generate text embeddings from the document text and then ingests the document, including text and embeddings, into an OpenSearch index. + +An alternative to this real-time process, _asynchronous_ batch ingestion, ingests both documents and their embeddings generated outside of OpenSearch and stored on a remote file server, such as Amazon Simple Storage Service (Amazon S3) or OpenAI. Asynchronous ingestion returns a task ID and runs asynchronously to ingest data offline into your k-NN cluster for neural search. You can use asynchronous batch ingestion together with the [Batch Predict API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/model-apis/batch-predict/) to perform inference asynchronously. The batch predict operation takes an input file containing documents and calls an externally hosted model to generate embeddings for those documents in an output file. 
You can then use asynchronous batch ingestion to ingest both the input file containing documents and the output file containing their embeddings into an OpenSearch index. + +As of OpenSearch 2.17, the Asynchronous Batch Ingestion API is supported by Amazon SageMaker, Amazon Bedrock, and OpenAI. +{: .note} + +## Prerequisites + +Before using asynchronous batch ingestion, you must generate text embeddings using a model of your choice and store the output on a file server, such as Amazon S3. For example, you can store the output of a Batch API call to an Amazon SageMaker text embedding model in a file with the Amazon S3 output path `s3://offlinebatch/output/sagemaker_batch.json.out`. The output is in JSONL format, with each line representing a text embedding result. The file contents have the following format: + +``` +{"SageMakerOutput":[[-0.017166402,0.055771016,...],[-0.06422759,-0.004301484,...],"content":["this is chapter 1","harry potter"],"id":1} +{"SageMakerOutput":[[-0.017455402,0.023771016,...],[-0.02322759,-0.009101284,...],"content":["this is chapter 2","draco malfoy"],"id":1} +... +``` + +## Ingesting data from a single file + +First, create a k-NN index into which you'll ingest the data. The fields in the k-NN index represent the structure of the data in the source file. + +In this example, the source file holds documents containing titles and chapters, along with their corresponding embeddings. Thus, you'll create a k-NN index with the fields `id`, `chapter_embedding`, `chapter`, `title_embedding`, and `title`: + +```json +PUT /my-nlp-index +{ + "settings": { + "index.knn": true + }, + "mappings": { + "properties": { + "id": { + "type": "text" + }, + "chapter_embedding": { + "type": "knn_vector", + "dimension": 384, + "method": { + "engine": "nmslib", + "space_type": "cosinesimil", + "name": "hnsw", + "parameters": { + "ef_construction": 512, + "m": 16 + } + } + }, + "chapter": { + "type": "text" + }, + "title_embedding": { + "type": "knn_vector", + "dimension": 384, + "method": { + "engine": "nmslib", + "space_type": "cosinesimil", + "name": "hnsw", + "parameters": { + "ef_construction": 512, + "m": 16 + } + } + }, + "title": { + "type": "text" + } + } + } +} +``` +{% include copy-curl.html %} + +When using an S3 file as the source for asynchronous batch ingestion, you must map the fields in the source file to fields in the index in order to indicate into which index each piece of data is ingested. If no JSON path is provided for a field, that field will be set to `null` in the k-NN index. + +In the `field_map`, indicate the location of the data for each field in the source file. You can also specify fields to be ingested directly into your index without making any changes to the source file by adding their JSON paths to the `ingest_fields` array. For example, in the following asynchronous batch ingestion request, the element with the JSON path `$.id` from the source file is ingested directly into the `id` field of your index. 
To ingest this data from the Amazon S3 file, send the following request to your OpenSearch endpoint: + +```json +POST /_plugins/_ml/_batch_ingestion +{ + "index_name": "my-nlp-index", + "field_map": { + "chapter": "$.content[0]", + "title": "$.content[1]", + "chapter_embedding": "$.SageMakerOutput[0]", + "title_embedding": "$.SageMakerOutput[1]", + "_id": "$.id" + }, + "ingest_fields": ["$.id"], + "credential": { + "region": "us-east-1", + "access_key": "", + "secret_key": "", + "session_token": "" + }, + "data_source": { + "type": "s3", + "source": ["s3://offlinebatch/output/sagemaker_batch.json.out"] + } +} +``` +{% include copy-curl.html %} + +The response contains a task ID for the ingestion task: + +```json +{ + "task_id": "cbsPlpEBMHcagzGbOQOx", + "task_type": "BATCH_INGEST", + "status": "CREATED" +} +``` + +To check the status of the operation, provide the task ID to the [Tasks API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/tasks-apis/get-task/). Once ingestion is complete, the task `state` changes to `COMPLETED`. + + +## Ingesting data from multiple files + +You can also ingest data from multiple files by specifying the file locations in the `source`. The following example ingests data from three OpenAI files. + +The OpenAI Batch API input file is formatted as follows: + +``` +{"custom_id": "request-1", "method": "POST", "url": "/v1/embeddings", "body": {"model": "text-embedding-ada-002", "input": [ "What is the meaning of life?", "The food was delicious and the waiter..."]}} +{"custom_id": "request-2", "method": "POST", "url": "/v1/embeddings", "body": {"model": "text-embedding-ada-002", "input": [ "What is the meaning of work?", "The travel was fantastic and the view..."]}} +{"custom_id": "request-3", "method": "POST", "url": "/v1/embeddings", "body": {"model": "text-embedding-ada-002", "input": [ "What is the meaning of friend?", "The old friend was far away and the time..."]}} +... +``` + +The OpenAI Batch API output file is formatted as follows: + +``` +{"id": "batch_req_ITKQn29igorXCAGp6wzYs5IS", "custom_id": "request-1", "response": {"status_code": 200, "request_id": "10845755592510080d13054c3776aef4", "body": {"object": "list", "data": [{"object": "embedding", "index": 0, "embedding": [0.0044326545, ... ...]}, {"object": "embedding", "index": 1, "embedding": [0.002297497, ... ... ]}], "model": "text-embedding-ada-002", "usage": {"prompt_tokens": 15, "total_tokens": 15}}}, "error": null} +... +``` + +If you have run the Batch API in OpenAI for text embedding and want to ingest the model input and output files along with some metadata into your index, send the following asynchronous ingestion request. Make sure to use `source[file-index]` to identify the file's location in the source array in the request body. For example, `source[0]` refers to the first file in the `data_source.source` array. + +The following request ingests seven fields into your index: Five are specified in the `field_map` section and two are specified in `ingest_fields`. The format follows the pattern `sourcefile.jsonPath`, indicating the JSON path for each file. In the field_map, `$.body.input[0]` is used as the JSON path to ingest data into the `question` field from the second file in the `source` array. 
The `ingest_fields` array lists all elements from the `source` files that will be ingested directly into your index: + +```json +POST /_plugins/_ml/_batch_ingestion +{ + "index_name": "my-nlp-index-openai", + "field_map": { + "question": "source[1].$.body.input[0]", + "answer": "source[1].$.body.input[1]", + "question_embedding":"source[0].$.response.body.data[0].embedding", + "answer_embedding":"source[0].$.response.body.data[1].embedding", + "_id": ["source[0].$.custom_id", "source[1].$.custom_id"] + }, + "ingest_fields": ["source[2].$.custom_field1", "source[2].$.custom_field2"], + "credential": { + "openAI_key": "" + }, + "data_source": { + "type": "openAI", + "source": ["file-", "file-", "file-"] + } +} +``` +{% include copy-curl.html %} + +In the request, make sure to define the `_id` field in the `field_map`. This is necessary in order to map each data entry from the three separate files. + +The response contains a task ID for the ingestion task: + +```json +{ + "task_id": "cbsPlpEBMHcagzGbOQOx", + "task_type": "BATCH_INGEST", + "status": "CREATED" +} +``` + +To check the status of the operation, provide the task ID to the [Tasks API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/tasks-apis/get-task/). Once ingestion is complete, the task `state` changes to `COMPLETED`. + +For request field descriptions, see [Asynchronous Batch Ingestion API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/async-batch-ingest/). \ No newline at end of file From 2d34fb75be20b14813a73e1798072e2788911d41 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Mon, 16 Sep 2024 14:00:48 -0500 Subject: [PATCH 064/111] Revert "Add documentation for workload management (#8228)" (#8280) This reverts commit f8c4f5c0665ec13a0dd21bc45503525cd0c40bb6. --- .../availability-recovery.md | 5 -- .../workload-management.md | 60 ------------------- 2 files changed, 65 deletions(-) delete mode 100644 _tuning-your-cluster/availability-and-recovery/workload-management.md diff --git a/_install-and-configure/configuring-opensearch/availability-recovery.md b/_install-and-configure/configuring-opensearch/availability-recovery.md index 94960ebe0a..d25396a63f 100644 --- a/_install-and-configure/configuring-opensearch/availability-recovery.md +++ b/_install-and-configure/configuring-opensearch/availability-recovery.md @@ -16,7 +16,6 @@ Availability and recovery settings include settings for the following: - [Shard indexing backpressure](#shard-indexing-backpressure-settings) - [Segment replication](#segment-replication-settings) - [Cross-cluster replication](#cross-cluster-replication-settings) -- [Workload management](#workload-management-settings) To learn more about static and dynamic settings, see [Configuring OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/). @@ -71,7 +70,3 @@ For information about segment replication backpressure settings, see [Segment re ## Cross-cluster replication settings For information about cross-cluster replication settings, see [Replication settings]({{site.url}}{{site.baseurl}}/tuning-your-cluster/replication-plugin/settings/). - -## Workload management settings - -Workload management is a mechanism that allows administrators to organize queries into distinct groups. For more information, see [Workload management settings]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/workload-management/#workload-management-settings). 
diff --git a/_tuning-your-cluster/availability-and-recovery/workload-management.md b/_tuning-your-cluster/availability-and-recovery/workload-management.md deleted file mode 100644 index 1c6d9baf46..0000000000 --- a/_tuning-your-cluster/availability-and-recovery/workload-management.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -layout: default -title: Workload management -nav_order: 60 -has_children: false -parent: Availability and recovery ---- - -# Workload management - -Workload management is a mechanism that allows administrators to organize queries into distinct groups, referred to as _query groups_. These query groups enable admins to limit the cumulative resource usage of each group, ensuring more balanced and fair resource distribution between them. This mechanism provides greater control over resource consumption so that no single group can monopolize cluster resources at the expense of others. - -## Query group - -A query group is a logical construct designed to manage search requests within defined virtual resource limits. The query group service tracks and aggregates resource usage at the node level for different groups, enforcing restrictions to ensure that no group exceeds its allocated resources. Depending on the configured containment mode, the system can limit or terminate tasks that surpass these predefined thresholds. - -Because the definition of a query group is stored in the cluster state, these resource limits are enforced consistently across all nodes in the cluster. - -### Schema - -Query groups use the following schema: - -```json -{ - "_id": "fafjafjkaf9ag8a9ga9g7ag0aagaga", - "resource_limits": { - "memory": 0.4, - "cpu": 0.2 - }, - "resiliency_mode": "enforced", - "name": "analytics", - "updated_at": 4513232415 -} -``` - -### Resource type - -Resource types represent the various system resources that are monitored and managed across different query groups. The following resource types are supported: - -- CPU usage -- JVM memory usage - -### Resiliency mode - -Resiliency mode determines how the assigned resource limits relate to the actual allowed resource usage. The following resiliency modes are supported: - -- **Soft mode** -- The query group can exceed the query group resource limits if the node is not under duress. -- **Enforced mode** -- The query group will never exceed the assigned limits and will be canceled as soon as the limits are exceeded. -- **Monitor mode** -- The query group will not cause any cancellations and will only log the eligible task cancellations. - -## Workload management settings - -Workload management settings allow you to define thresholds for rejecting or canceling tasks based on resource usage. Adjusting the following settings can help to maintain optimal performance and stability within your OpenSearch cluster. - -Setting | Default | Description -:--- | :--- | :--- -`wlm.query_group.node.memory_rejection_threshold` | `0.8` | The memory-based rejection threshold for query groups at the node level. Tasks that exceed this threshold will be rejected. The maximum allowed value is `0.9`. -`wlm.query_group.node.memory_cancellation_threshold` | `0.9` | The memory-based cancellation threshold for query groups at the node level. Tasks that exceed this threshold will be canceled. The maximum allowed value is `0.95`. -`wlm.query_group.node.cpu_rejection_threshold` | `0.8` | The CPU-based rejection threshold for query groups at the node level. Tasks that exceed this threshold will be rejected. The maximum allowed value is `0.9`. 
-`wlm.query_group.node.cpu_cancellation_threshold` | `0.9` | The CPU-based cancellation threshold for query groups at the node level. Tasks that exceed this threshold will be canceled. The maximum allowed value is `0.95`. From 4da6b6799be2538b00d4578c4abb21fc507b67f7 Mon Sep 17 00:00:00 2001 From: Zelin Hao Date: Mon, 16 Sep 2024 13:03:57 -0700 Subject: [PATCH 065/111] Add new filters for events and blogs (#8209) * Add new filters for events and blogs Signed-off-by: Zelin Hao * Update the breakline style Signed-off-by: Zelin Hao --------- Signed-off-by: Zelin Hao --- _layouts/search_layout.html | 38 ++++++++++++++++++++++--------------- assets/js/search.js | 2 +- 2 files changed, 24 insertions(+), 16 deletions(-) diff --git a/_layouts/search_layout.html b/_layouts/search_layout.html index 47b8f25d1c..67e877fcb8 100644 --- a/_layouts/search_layout.html +++ b/_layouts/search_layout.html @@ -38,12 +38,16 @@

Filter results

- +
- - + + +
+
+ +
@@ -97,10 +101,7 @@

element.value).join(','); const urlPath = window.location.pathname; const versionMatch = urlPath.match(/(\d+\.\d+)/); const docsVersion = versionMatch ? versionMatch[1] : "latest"; @@ -139,11 +140,12 @@

{ + categoryBlog.addEventListener('change', () => { + updateAllCheckbox(); + triggerSearch(searchInput.value.trim()); + }); + categoryEvent.addEventListener('change', () => { updateAllCheckbox(); triggerSearch(searchInput.value.trim()); }); diff --git a/assets/js/search.js b/assets/js/search.js index 8d9cab2ec5..4d4fce62f3 100644 --- a/assets/js/search.js +++ b/assets/js/search.js @@ -319,7 +319,7 @@ window.doResultsPageSearch = async (query, type, version) => { searchResultsContainer.appendChild(resultElement); const breakline = document.createElement('hr'); - breakline.style.border = '.5px solid #ccc'; + breakline.style.borderTop = '.5px solid #ccc'; breakline.style.margin = 'auto'; searchResultsContainer.appendChild(breakline); }); From e0045f9e8137399f8ba9299e0b36ed0a9fe57d63 Mon Sep 17 00:00:00 2001 From: jonfritz <134336691+jonfritz@users.noreply.github.com> Date: Mon, 16 Sep 2024 13:18:36 -0700 Subject: [PATCH 066/111] Add Sycamore page (#8234) * Create sycamore.md Create Sycamore page Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update sycamore.md Add to docs Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update sycamore.md Add info to docs page Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update sycamore.md Correct typo Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws 
<105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/sycamore.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update sycamore.md Updates from suggestions Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update index.md Updating index with Sycamore Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> * Update _tools/index.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Add front matter Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: jonfritz <134336691+jonfritz@users.noreply.github.com> Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: Nathan Bower --- _tools/index.md | 7 +++++++ _tools/sycamore.md | 48 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 55 insertions(+) create mode 100644 _tools/sycamore.md diff --git a/_tools/index.md b/_tools/index.md index 108f10da97..c9d446a81a 100644 --- a/_tools/index.md +++ b/_tools/index.md @@ -18,6 +18,7 @@ This section provides documentation for OpenSearch-supported tools, including: - [OpenSearch CLI](#opensearch-cli) - [OpenSearch Kubernetes operator](#opensearch-kubernetes-operator) - [OpenSearch upgrade, migration, and comparison tools](#opensearch-upgrade-migration-and-comparison-tools) +- [Sycamore](#sycamore) for AI-powered extract, transform, load (ETL) on complex documents for vector and hybrid search For information about Data Prepper, the server-side data collector for filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization, see [Data Prepper]({{site.url}}{{site.baseurl}}/data-prepper/index/). @@ -122,3 +123,9 @@ The OpenSearch Kubernetes Operator is an open-source Kubernetes operator that he OpenSearch migration tools facilitate migrations to OpenSearch and upgrades to newer versions of OpenSearch. These can help you can set up a proof-of-concept environment locally using Docker containers or deploy to AWS using a one-click deployment script. This empowers you to fine-tune cluster configurations and manage workloads more effectively before migration. For more information about OpenSearch migration tools, see the documentation in the [OpenSearch Migration GitHub repository](https://github.com/opensearch-project/opensearch-migrations/tree/capture-and-replay-v0.1.0). + +## Sycamore + +[Sycamore](https://github.com/aryn-ai/sycamore) is an open-source, AI-powered document processing engine designed to prepare unstructured data for retrieval-augmented generation (RAG) and semantic search using Python. Sycamore supports chunking and enriching a wide range of complex document types, including reports, presentations, transcripts, and manuals. Additionally, Sycamore can extract and process embedded elements, such as tables, figures, graphs, and other infographics. It can then load the data into target indexes, including vector and keyword indexes, using an [OpenSearch connector](https://sycamore.readthedocs.io/en/stable/sycamore/connectors/opensearch.html). 
+ +For more information, see [Sycamore]({{site.url}}{{site.baseurl}}/tools/sycamore/). diff --git a/_tools/sycamore.md b/_tools/sycamore.md new file mode 100644 index 0000000000..7ce55931ac --- /dev/null +++ b/_tools/sycamore.md @@ -0,0 +1,48 @@ +--- +layout: default +title: Sycamore +nav_order: 210 +has_children: false +--- + +# Sycamore + +[Sycamore](https://github.com/aryn-ai/sycamore) is an open-source, AI-powered document processing engine designed to prepare unstructured data for retrieval-augmented generation (RAG) and semantic search using Python. Sycamore supports chunking and enriching a wide range of complex document types, including reports, presentations, transcripts, and manuals. Additionally, Sycamore can extract and process embedded elements, such as tables, figures, graphs, and other infographics. It can then load the data into target indexes, including vector and keyword indexes, using a connector like the [OpenSearch connector](https://sycamore.readthedocs.io/en/stable/sycamore/connectors/opensearch.html). + +To get started, visit the [Sycamore documentation](https://sycamore.readthedocs.io/en/stable/sycamore/get_started.html). + +# Sycamore ETL pipeline structure + +A Sycamore extract, transform, load (ETL) pipeline applies a series of transformations to a [DocSet](https://sycamore.readthedocs.io/en/stable/sycamore/get_started/concepts.html#docsets), which is a collection of documents and their constituent elements (for example, tables, blocks of text, or headers). At the end of the pipeline, the DocSet is loaded into OpenSearch vector and keyword indexes. + +A typical pipeline for preparing unstructured data for vector or hybrid search in OpenSearch consists of the following steps: + +* Read documents into a [DocSet](https://sycamore.readthedocs.io/en/stable/sycamore/get_started/concepts.html#docsets). +* [Partition documents](https://sycamore.readthedocs.io/en/stable/sycamore/transforms/partition.html) into structured JSON elements. +* Extract metadata, filter, and clean data using [transforms](https://sycamore.readthedocs.io/en/stable/sycamore/APIs/docset.html). +* Create [chunks](https://sycamore.readthedocs.io/en/stable/sycamore/transforms/merge.html) from groups of elements. +* Embed the chunks using the model of your choice. +* [Load](https://sycamore.readthedocs.io/en/stable/sycamore/connectors/opensearch.html) the embeddings, metadata, and text into OpenSearch vector and keyword indexes. + +For an example pipeline that uses this workflow, see [this notebook](https://github.com/aryn-ai/sycamore/blob/main/notebooks/opensearch_docs_etl.ipynb). + + +# Install Sycamore + +We recommend installing the Sycamore library using `pip`. The connector for OpenSearch can be specified and installed using extras. For example: + +```bash +pip install sycamore-ai[opensearch] +``` +{% include copy.html %} + +By default, Sycamore works with the Aryn Partitioning Service to process PDFs. To run inference locally for partitioning or embedding, install Sycamore with the `local-inference` extra as follows: + +```bash +pip install sycamore-ai[opensearch,local-inference] +``` +{% include copy.html %} + +## Next steps + +For more information, visit the [Sycamore documentation](https://sycamore.readthedocs.io/en/stable/sycamore/get_started.html). 
\ No newline at end of file From 242cc6b1e743f742cf1774ef74b545788ec9c318 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Mon, 16 Sep 2024 17:10:56 -0400 Subject: [PATCH 067/111] Update headings and punctuation in sycamore page (#8301) * Update sycamore.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Update _tools/sycamore.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _tools/sycamore.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/_tools/sycamore.md b/_tools/sycamore.md index 7ce55931ac..9b3986dbf3 100644 --- a/_tools/sycamore.md +++ b/_tools/sycamore.md @@ -11,7 +11,7 @@ has_children: false To get started, visit the [Sycamore documentation](https://sycamore.readthedocs.io/en/stable/sycamore/get_started.html). -# Sycamore ETL pipeline structure +## Sycamore ETL pipeline structure A Sycamore extract, transform, load (ETL) pipeline applies a series of transformations to a [DocSet](https://sycamore.readthedocs.io/en/stable/sycamore/get_started/concepts.html#docsets), which is a collection of documents and their constituent elements (for example, tables, blocks of text, or headers). At the end of the pipeline, the DocSet is loaded into OpenSearch vector and keyword indexes. @@ -19,7 +19,7 @@ A typical pipeline for preparing unstructured data for vector or hybrid search i * Read documents into a [DocSet](https://sycamore.readthedocs.io/en/stable/sycamore/get_started/concepts.html#docsets). * [Partition documents](https://sycamore.readthedocs.io/en/stable/sycamore/transforms/partition.html) into structured JSON elements. -* Extract metadata, filter, and clean data using [transforms](https://sycamore.readthedocs.io/en/stable/sycamore/APIs/docset.html). +* Extract metadata and filter and clean data using [transforms](https://sycamore.readthedocs.io/en/stable/sycamore/APIs/docset.html). * Create [chunks](https://sycamore.readthedocs.io/en/stable/sycamore/transforms/merge.html) from groups of elements. * Embed the chunks using the model of your choice. * [Load](https://sycamore.readthedocs.io/en/stable/sycamore/connectors/opensearch.html) the embeddings, metadata, and text into OpenSearch vector and keyword indexes. @@ -27,7 +27,7 @@ A typical pipeline for preparing unstructured data for vector or hybrid search i For an example pipeline that uses this workflow, see [this notebook](https://github.com/aryn-ai/sycamore/blob/main/notebooks/opensearch_docs_etl.ipynb). -# Install Sycamore +## Install Sycamore We recommend installing the Sycamore library using `pip`. The connector for OpenSearch can be specified and installed using extras. For example: @@ -45,4 +45,4 @@ pip install sycamore-ai[opensearch,local-inference] ## Next steps -For more information, visit the [Sycamore documentation](https://sycamore.readthedocs.io/en/stable/sycamore/get_started.html). \ No newline at end of file +For more information, visit the [Sycamore documentation](https://sycamore.readthedocs.io/en/stable/sycamore/get_started.html). 
From cbb3f3c35da4b1a37ab86ce705ebf559d90d3098 Mon Sep 17 00:00:00 2001 From: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Date: Tue, 17 Sep 2024 03:59:34 +0530 Subject: [PATCH 068/111] Add documentation for context and ABC templates (#8197) * Add documentation for context and ABC templates Signed-off-by: Mohit Godwani * Add information in create index and template Signed-off-by: Mohit Godwani * Apply suggestions from code review Co-authored-by: Prabhakar Sithanandam Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _api-reference/index-apis/create-index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Add technical writer review. Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _im-plugin/index-context.md Co-authored-by: Prabhakar Sithanandam Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Prabhakar Sithanandam Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Mohit Godwani Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Prabhakar Sithanandam Co-authored-by: Nathan Bower --- .../index-apis/create-index-template.md | 2 +- _api-reference/index-apis/create-index.md | 2 +- _im-plugin/index-context.md | 175 ++++++++++++++++++ 3 files changed, 177 insertions(+), 2 deletions(-) create mode 100644 _im-plugin/index-context.md diff --git a/_api-reference/index-apis/create-index-template.md b/_api-reference/index-apis/create-index-template.md index 2a92e3f4c4..ea71126210 100644 --- a/_api-reference/index-apis/create-index-template.md +++ b/_api-reference/index-apis/create-index-template.md @@ -45,7 +45,7 @@ Parameter | Type | Description `priority` | Integer | A number that determines which index templates take precedence during the creation of a new index or data stream. OpenSearch chooses the template with the highest priority. When no priority is given, the template is assigned a `0`, signifying the lowest priority. Optional. 
`template` | Object | The template that includes the `aliases`, `mappings`, or `settings` for the index. For more information, see [#template]. Optional. `version` | Integer | The version number used to manage index templates. Version numbers are not automatically set by OpenSearch. Optional. - +`context` | Object | (Experimental) The `context` parameter provides use-case-specific predefined templates that can be applied to an index. Among all settings and mappings declared for a template, context templates hold the highest priority. For more information, see [index-context]({{site.url}}{{site.baseurl}}/im-plugin/index-context/). ### Template diff --git a/_api-reference/index-apis/create-index.md b/_api-reference/index-apis/create-index.md index 2f4c1041bc..7f7d26815f 100644 --- a/_api-reference/index-apis/create-index.md +++ b/_api-reference/index-apis/create-index.md @@ -50,7 +50,7 @@ timeout | Time | How long to wait for the request to return. Default is `30s`. ## Request body -As part of your request, you can optionally specify [index settings]({{site.url}}{{site.baseurl}}/im-plugin/index-settings/), [mappings]({{site.url}}{{site.baseurl}}/field-types/index/), and [aliases]({{site.url}}{{site.baseurl}}/opensearch/index-alias/) for your newly created index. +As part of your request, you can optionally specify [index settings]({{site.url}}{{site.baseurl}}/im-plugin/index-settings/), [mappings]({{site.url}}{{site.baseurl}}/field-types/index/), [aliases]({{site.url}}{{site.baseurl}}/opensearch/index-alias/), and [index context]({{site.url}}{{site.baseurl}}/opensearch/index-context/). ## Example request diff --git a/_im-plugin/index-context.md b/_im-plugin/index-context.md new file mode 100644 index 0000000000..be0dbd527d --- /dev/null +++ b/_im-plugin/index-context.md @@ -0,0 +1,175 @@ +--- +layout: default +title: Index context +nav_order: 14 +redirect_from: + - /opensearch/index-context/ +--- + +# Index context + +This is an experimental feature and is not recommended for use in a production environment. For updates on the progress the feature or if you want to leave feedback, join the discussion on the [OpenSearch forum](https://forum.opensearch.org/). +{: .warning} + +Index context declares the use case for an index. Using the context information, OpenSearch applies a predetermined set of settings and mappings, which provides the following benefits: + +- Optimized performance +- Settings tuned to your specific use case +- Accurate mappings and aliases based on [OpenSearch Integrations]({{site.url}}{{site.baseurl}}/integrations/) + +The settings and metadata configuration that are applied using component templates are automatically loaded when your cluster starts. Component templates that start with `@abc_template@` or Application-Based Configuration (ABC) templates can only be used through a `context` object declaration, in order to prevent configuration issues. +{: .warning} + + +## Installation + +To install the index context feature: + +1. Install the `opensearch-system-templates` plugin on all nodes in your cluster using one of the [installation methods]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#install). + +2. Set the feature flag `opensearch.experimental.feature.application_templates.enabled` to `true`. For more information about enabling and disabling feature flags, see [Enabling experimental features]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/experimental/). + +3. 
Set the `cluster.application_templates.enabled` setting to `true`. For instructions on how to configure OpenSearch, see [configuring settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/#static-settings). + +## Using the `context` setting + +Use the `context` setting with the Index API to add use-case-specific context. + +### Considerations + +Consider the following when using the `context` parameter during index creation: + +1. If you use the `context` parameter to create an index, you cannot include any settings declared in the index context during index creation or dynamic settings updates. +2. The index context becomes permanent when set on an index or index template. + +When you adhere to these limitations, suggested configurations or mappings are uniformly applied on indexed data within the specified context. + +### Examples + +The following examples show how to use index context. + + +#### Create an index + +The following example request creates an index in which to store metric data by declaring a `metrics` mapping as the context: + +```json +PUT /my-metrics-index +{ + "context": { + "name": "metrics" + } +} +``` +{% include copy-curl.html %} + +After creation, the context is added to the index and the corresponding settings are applied: + + +**GET request** + +```json +GET /my-metrics-index +``` +{% include copy-curl.html %} + + +**Response** + +```json +{ + "my-metrics-index": { + "aliases": {}, + "mappings": {}, + "settings": { + "index": { + "codec": "zstd_no_dict", + "refresh_interval": "60s", + "number_of_shards": "1", + "provided_name": "my-metrics-index", + "merge": { + "policy": "log_byte_size" + }, + "context": { + "created_version": "1", + "current_version": "1" + }, + ... + } + }, + "context": { + "name": "metrics", + "version": "_latest" + } + } +} +``` + + +#### Create an index template + +You can also use the `context` parameter when creating an index template. The following example request creates an index template with the context information as `logs`: + +```json +PUT _index_template/my-logs +{ + "context": { + "name": "logs", + "version": "1" + }, + "index_patterns": [ + "my-logs-*" + ] +} +``` +{% include copy-curl.html %} + +All indexes created using this index template will get the metadata provided by the associated component template. The following request and response show how `context` is added to the template: + +**Get index template** + +```json +GET _index_template/my-logs +``` +{% include copy-curl.html %} + +**Response** + +```json +{ + "index_templates": [ + { + "name": "my-logs2", + "index_template": { + "index_patterns": [ + "my-logs1-*" + ], + "context": { + "name": "logs", + "version": "1" + } + } + } + ] +} +``` + +If there is any conflict between any settings, mappings, or aliases directly declared by your template and the backing component template for the context, the latter gets higher priority during index creation. + + +## Available context templates + +The following templates are available to be used through the `context` parameter as of OpenSearch 2.17: + +- `logs` +- `metrics` +- `nginx-logs` +- `amazon-cloudtrail-logs` +- `amazon-elb-logs` +- `amazon-s3-logs` +- `apache-web-logs` +- `k8s-logs` + +For more information about these templates, see the [OpenSearch system templates repository](https://github.com/opensearch-project/opensearch-system-templates/tree/main/src/main/resources/org/opensearch/system/applicationtemplates/v1). 
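Any of these context templates can be supplied as the context `name` at index creation time, in the same way as the `metrics` example shown earlier on this page. The following is a minimal sketch using the `nginx-logs` context; the index name `my-nginx-index` is purely illustrative:

```json
PUT /my-nginx-index
{
  "context": {
    "name": "nginx-logs"
  }
}
```
{% include copy-curl.html %}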
+ +To view the current version of these templates on your cluster, use `GET /_component_template`. From d15c7bf32c6e2a6076c25c7c91ba2b795f84d19b Mon Sep 17 00:00:00 2001 From: Lakshya Taragi <157457166+ltaragi@users.noreply.github.com> Date: Tue, 17 Sep 2024 03:59:56 +0530 Subject: [PATCH 069/111] Add documentation changes for Snapshot Status API (#8235) * Snapshot Status API documentation changes for shallow v2 snapshots Signed-off-by: Lakshya Taragi * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update get-snapshot-status.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Lakshya Taragi Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../snapshots/get-snapshot-status.md | 26 ++++++++++++------- 1 file changed, 17 insertions(+), 9 deletions(-) diff --git a/_api-reference/snapshots/get-snapshot-status.md b/_api-reference/snapshots/get-snapshot-status.md index c7f919bcb3..9636b40d64 100644 --- a/_api-reference/snapshots/get-snapshot-status.md +++ b/_api-reference/snapshots/get-snapshot-status.md @@ -22,8 +22,9 @@ Path parameters are optional. | Parameter | Data type | Description | :--- | :--- | :--- -| repository | String | Repository containing the snapshot. | -| snapshot | String | Snapshot to return. | +| repository | String | The repository containing the snapshot. | +| snapshot | List | The snapshot(s) to return. | +| index | List | The indexes to include in the response. | Three request variants provide flexibility: @@ -31,16 +32,23 @@ Three request variants provide flexibility: * `GET _snapshot//_status` returns all currently running snapshots in the specified repository. This is the preferred variant. -* `GET _snapshot///_status` returns detailed status information for a specific snapshot in the specified repository, regardless of whether it's currently running or not. +* `GET _snapshot///_status` returns detailed status information for a specific snapshot(s) in the specified repository, regardless of whether it's currently running. -Using the API to return state for other than currently running snapshots can be very costly for (1) machine machine resources and (2) processing time if running in the cloud. For each snapshot, each request causes file reads from all a snapshot's shards. +* `GET /_snapshot////_status` returns detailed status information only for the specified indexes in a specific snapshot in the specified repository. Note that this endpoint works only for indexes belonging to a specific snapshot. 
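For example, a status request using the index-scoped variant might look like the following. This is only an illustrative sketch: the repository and snapshot names reuse those from the example request later on this page, and the index names are hypothetical.

```json
GET _snapshot/my-opensearch-repo/my-first-snapshot/my-index-1,my-index-2/_status
```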
+ +Snapshot API calls only work if the total number of shards across the requested resources, such as snapshots and indexes created from snapshots, is smaller than the limit specified by the following cluster setting: + +- `snapshot.max_shards_allowed_in_status_api`(Dynamic, integer): The maximum number of shards that can be included in the Snapshot Status API response. Default value is `200000`. Not applicable for [shallow snapshots v2]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/remote-store/snapshot-interoperability##shallow-snapshot-v2), where the total number and sizes of files are returned as 0. + + +Using the API to return the state of snapshots that are not currently running can be very costly in terms of both machine resources and processing time when querying data in the cloud. For each snapshot, each request causes a file read of all of the snapshot's shards. {: .warning} ## Request fields | Field | Data type | Description | :--- | :--- | :--- -| ignore_unavailable | Boolean | How to handles requests for unavailable snapshots. If `false`, the request returns an error for unavailable snapshots. If `true`, the request ignores unavailable snapshots, such as those that are corrupted or temporarily cannot be returned. Defaults to `false`.| +| ignore_unavailable | Boolean | How to handle requests for unavailable snapshots and indexes. If `false`, the request returns an error for unavailable snapshots and indexes. If `true`, the request ignores unavailable snapshots and indexes, such as those that are corrupted or temporarily cannot be returned. Default is `false`.| ## Example request @@ -375,18 +383,18 @@ The `GET _snapshot/my-opensearch-repo/my-first-snapshot/_status` request returns :--- | :--- | :--- | repository | String | Name of repository that contains the snapshot. | | snapshot | String | Snapshot name. | -| uuid | String | Snapshot Universally unique identifier (UUID). | +| uuid | String | A snapshot's universally unique identifier (UUID). | | state | String | Snapshot's current status. See [Snapshot states](#snapshot-states). | | include_global_state | Boolean | Whether the current cluster state is included in the snapshot. | | shards_stats | Object | Snapshot's shard counts. See [Shard stats](#shard-stats). | -| stats | Object | Details of files included in the snapshot. `file_count`: number of files. `size_in_bytes`: total of all fie sizes. See [Snapshot file stats](#snapshot-file-stats). | +| stats | Object | Information about files included in the snapshot. `file_count`: number of files. `size_in_bytes`: total size of all files. See [Snapshot file stats](#snapshot-file-stats). | | index | list of Objects | List of objects that contain information about the indices in the snapshot. See [Index objects](#index-objects).| ##### Snapshot states | State | Description | :--- | :--- | -| FAILED | The snapshot terminated in an error and no data was stored. | +| FAILED | The snapshot terminated in an error and no data was stored. | | IN_PROGRESS | The snapshot is currently running. | | PARTIAL | The global cluster state was stored, but data from at least one shard was not stored. The `failures` property of the [Create snapshot]({{site.url}}{{site.baseurl}}/api-reference/snapshots/create-snapshot) response contains additional details. | | SUCCESS | The snapshot finished and all shards were stored successfully. | @@ -420,4 +428,4 @@ All property values are Integers. :--- | :--- | :--- | | shards_stats | Object | See [Shard stats](#shard-stats). 
| | stats | Object | See [Snapshot file stats](#snapshot-file-stats). |
-| shards | list of Objects | List of objects containing information about the shards that include the snapshot. OpenSearch returns the following properties about the shards. <br> <br> **stage**: Current state of shards in the snapshot. Shard states are: <br> <br> * DONE: Number of shards in the snapshot that were successfully stored in the repository. <br> <br> * FAILURE: Number of shards in the snapshot that were not successfully stored in the repository. <br> <br> * FINALIZE: Number of shards in the snapshot that are in the finalizing stage of being stored in the repository. <br> <br> * INIT: Number of shards in the snapshot that are in the initializing stage of being stored in the repository. <br> <br> * STARTED: Number of shards in the snapshot that are in the started stage of being stored in the repository. <br> <br> **stats**: See [Snapshot file stats](#snapshot-file-stats). <br> <br> **total**: Total number and size of files referenced by the snapshot. <br> <br> **start_time_in_millis**: Time (in milliseconds) when snapshot creation began. <br> <br> **time_in_millis**: Total time (in milliseconds) that the snapshot took to complete. |
+| shards | List of objects | Contains information about the shards included in the snapshot. OpenSearch returns the following properties about the shard: <br> <br> **stage**: The current state of shards in the snapshot. Shard states are: <br> <br> * DONE: The number of shards in the snapshot that were successfully stored in the repository. <br> <br> * FAILURE: The number of shards in the snapshot that were not successfully stored in the repository. <br> <br> * FINALIZE: The number of shards in the snapshot that are in the finalizing stage of being stored in the repository. <br> <br> * INIT: The number of shards in the snapshot that are in the initializing stage of being stored in the repository. <br> <br> * STARTED: The number of shards in the snapshot that are in the started stage of being stored in the repository. <br> <br> **stats**: See [Snapshot file stats](#snapshot-file-stats). <br> <br> **total**: The total number and sizes of files referenced by the snapshot. <br> <br> **start_time_in_millis**: The time (in milliseconds) when snapshot creation began. <br>
**time_in_millis**: The total amount of time (in milliseconds) that the snapshot took to complete. | From 709a54f59cc96f615f2d66572222df64e436ffb1 Mon Sep 17 00:00:00 2001 From: Vikasht34 Date: Tue, 17 Sep 2024 08:19:49 -0700 Subject: [PATCH 070/111] Documentation for Binary Quantization Support with KNN Vector Search (#8281) * Documentation for Binary Quantization Support with KNN Vector Search Signed-off-by: VIKASH TIWARI * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Update _search-plugins/knn/knn-vector-quantization.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Update _search-plugins/knn/knn-vector-quantization.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Update _search-plugins/knn/knn-vector-quantization.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: VIKASH TIWARI Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../knn/knn-vector-quantization.md | 174 +++++++++++++++++- 1 file changed, 173 insertions(+), 1 deletion(-) diff --git a/_search-plugins/knn/knn-vector-quantization.md b/_search-plugins/knn/knn-vector-quantization.md index fbdcb4ad2e..508f9e6535 100644 --- a/_search-plugins/knn/knn-vector-quantization.md +++ b/_search-plugins/knn/knn-vector-quantization.md @@ -11,7 +11,7 @@ has_math: true By default, the k-NN plugin supports the indexing and querying of vectors of type `float`, where each dimension of the vector occupies 4 bytes of memory. For use cases that require ingestion on a large scale, keeping `float` vectors can be expensive because OpenSearch needs to construct, load, save, and search graphs (for native `nmslib` and `faiss` engines). To reduce the memory footprint, you can use vector quantization. -OpenSearch supports many varieties of quantization. In general, the level of quantization will provide a trade-off between the accuracy of the nearest neighbor search and the size of the memory footprint consumed by the vector search. The supported types include byte vectors, 16-bit scalar quantization, and product quantization (PQ). +OpenSearch supports many varieties of quantization. In general, the level of quantization will provide a trade-off between the accuracy of the nearest neighbor search and the size of the memory footprint consumed by the vector search. The supported types include byte vectors, 16-bit scalar quantization, product quantization (PQ), and binary quantization(BQ). ## Byte vectors @@ -310,3 +310,175 @@ For example, assume that you have 1 million vectors with a dimension of 256, `iv ```r 1.1*((8 / 8 * 64 + 24) * 1000000 + 100 * (2^8 * 4 * 256 + 4 * 512 * 256)) ~= 0.171 GB ``` + +## Binary quantization + +Starting with version 2.17, OpenSearch supports BQ with binary vector support for the Faiss engine. BQ compresses vectors into a binary format (0s and 1s), making it highly efficient in terms of memory usage. 
You can choose to represent each vector dimension using 1, 2, or 4 bits, depending on the desired precision. One of the advantages of using BQ is that the training process is handled automatically during indexing. This means that no separate training step is required, unlike other quantization techniques such as PQ. + +### Using BQ +To configure BQ for the Faiss engine, define a `knn_vector` field and specify the `mode` as `on_disk`. This configuration defaults to 1-bit BQ and both `ef_search` and `ef_construction` set to `100`: + +```json +PUT my-vector-index +{ + "mappings": { + "properties": { + "my_vector_field": { + "type": "knn_vector", + "dimension": 8, + "space_type": "l2", + "data_type": "float", + "mode": "on_disk" + } + } + } +} +``` +{% include copy-curl.html %} + +To further optimize the configuration, you can specify additional parameters, such as the compression level, and fine-tune the search parameters. For example, you can override the `ef_construction` value or define the compression level, which corresponds to the number of bits used for quantization: + +- **32x compression** for 1-bit quantization +- **16x compression** for 2-bit quantization +- **8x compression** for 4-bit quantization + +This allows for greater control over memory usage and recall performance, providing flexibility to balance between precision and storage efficiency. + +To specify the compression level, set the `compression_level` parameter: + +```json +PUT my-vector-index +{ + "mappings": { + "properties": { + "my_vector_field": { + "type": "knn_vector", + "dimension": 8, + "space_type": "l2", + "data_type": "float", + "mode": "on_disk", + "compression_level": "16x", + "method": { + "params": { + "ef_construction": 16 + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +The following example further fine-tunes the configuration by defining `ef_construction`, `encoder`, and the number of `bits` (which can be `1`, `2`, or `4`): + +```json +PUT my-vector-index +{ + "mappings": { + "properties": { + "my_vector_field": { + "type": "knn_vector", + "dimension": 8, + "method": { + "name": "hnsw", + "engine": "faiss", + "space_type": "l2", + "params": { + "m": 16, + "ef_construction": 512, + "encoder": { + "name": "binary", + "parameters": { + "bits": 1 + } + } + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +### Search using binary quantized vectors + +You can perform a k-NN search on your index by providing a vector and specifying the number of nearest neighbors (k) to return: + +```json +GET my-vector-index/_search +{ + "size": 2, + "query": { + "knn": { + "my_vector_field": { + "vector": [1.5, 5.5, 1.5, 5.5, 1.5, 5.5, 1.5, 5.5], + "k": 10 + } + } + } +} +``` +{% include copy-curl.html %} + +You can also fine-tune search by providing the `ef_search` and `oversample_factor` parameters. +The `oversample_factor` parameter controls the factor by which the search oversamples the candidate vectors before ranking them. Using a higher oversample factor means that more candidates will be considered before ranking, improving accuracy but also increasing search time. When selecting the `oversample_factor` value, consider the trade-off between accuracy and efficiency. For example, setting the `oversample_factor` to `2.0` will double the number of candidates considered during the ranking phase, which may help achieve better results. 
+ +The following request specifies the `ef_search` and `oversample_factor` parameters: + +```json +GET my-vector-index/_search +{ + "size": 2, + "query": { + "knn": { + "my_vector_field": { + "vector": [1.5, 5.5, 1.5, 5.5, 1.5, 5.5, 1.5, 5.5], + "k": 10, + "method_params": { + "ef_search": 10 + }, + "rescore": { + "oversample_factor": 10.0 + } + } + } + } +} +``` +{% include copy-curl.html %} + + +#### HNSW memory estimation + +The memory required for the Hierarchical Navigable Small World (HNSW) graph can be estimated as `1.1 * (dimension + 8 * m)` bytes/vector, where `m` is the maximum number of bidirectional links created for each element during the construction of the graph. + +As an example, assume that you have 1 million vectors with a dimension of 256 and an `m` of 16. The following sections provide memory requirement estimations for various compression values. + +##### 1-bit quantization (32x compression) + +In 1-bit quantization, each dimension is represented using 1 bit, equivalent to a 32x compression factor. The memory requirement can be estimated as follows: + +```r +Memory = 1.1 * ((256 * 1 / 8) + 8 * 16) * 1,000,000 + ~= 0.176 GB +``` + +##### 2-bit quantization (16x compression) + +In 2-bit quantization, each dimension is represented using 2 bits, equivalent to a 16x compression factor. The memory requirement can be estimated as follows: + +```r +Memory = 1.1 * ((256 * 2 / 8) + 8 * 16) * 1,000,000 + ~= 0.211 GB +``` + +##### 4-bit quantization (8x compression) + +In 4-bit quantization, each dimension is represented using 4 bits, equivalent to an 8x compression factor. The memory requirement can be estimated as follows: + +```r +Memory = 1.1 * ((256 * 4 / 8) + 8 * 16) * 1,000,000 + ~= 0.282 GB +``` From db292d93250fb737d0648fa8c80d86e4a44981b1 Mon Sep 17 00:00:00 2001 From: Bhavana Ramaram Date: Tue, 17 Sep 2024 13:01:44 -0500 Subject: [PATCH 071/111] Get offline batch inference details using task API in m (#8305) * get offline batch inference details using task API in ml Signed-off-by: Bhavana Ramaram * Doc review Signed-off-by: Fanit Kolchina * Typo fix Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Update _ml-commons-plugin/api/model-apis/batch-predict.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Update _ml-commons-plugin/api/model-apis/batch-predict.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Add parameter values Signed-off-by: Fanit Kolchina * Extra spaces Signed-off-by: Fanit Kolchina --------- Signed-off-by: Bhavana Ramaram Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../api/model-apis/batch-predict.md | 140 +++++++++++++----- 1 file changed, 102 insertions(+), 38 deletions(-) diff --git a/_ml-commons-plugin/api/model-apis/batch-predict.md b/_ml-commons-plugin/api/model-apis/batch-predict.md index b32fbb108d..c1dc7348fe 100644 --- a/_ml-commons-plugin/api/model-apis/batch-predict.md +++ b/_ml-commons-plugin/api/model-apis/batch-predict.md @@ -31,7 +31,13 @@ POST /_plugins/_ml/models//_batch_predict ## Prerequisites -Before using the Batch Predict API, you need to create a connector to the externally hosted model. 
For example, to create a connector to an OpenAI `text-embedding-ada-002` model, send the following request: +Before using the Batch Predict API, you need to create a connector to the externally hosted model. For each action, specify the `action_type` parameter that describes the action: + +- `batch_predict`: Runs the batch predict operation. +- `batch_predict_status`: Checks the batch predict operation status. +- `cancel_batch_predict`: Cancels the batch predict operation. + +For example, to create a connector to an OpenAI `text-embedding-ada-002` model, send the following request. The `cancel_batch_predict` action is optional and supports canceling the batch job running on OpenAI: ```json POST /_plugins/_ml/connectors/_create @@ -68,6 +74,22 @@ POST /_plugins/_ml/connectors/_create "Authorization": "Bearer ${credential.openAI_key}" }, "request_body": "{ \"input_file_id\": \"${parameters.input_file_id}\", \"endpoint\": \"${parameters.endpoint}\", \"completion_window\": \"24h\" }" + }, + { + "action_type": "batch_predict_status", + "method": "GET", + "url": "https://api.openai.com/v1/batches/${parameters.id}", + "headers": { + "Authorization": "Bearer ${credential.openAI_key}" + } + }, + { + "action_type": "cancel_batch_predict", + "method": "POST", + "url": "https://api.openai.com/v1/batches/${parameters.id}/cancel", + "headers": { + "Authorization": "Bearer ${credential.openAI_key}" + } } ] } @@ -123,45 +145,87 @@ POST /_plugins/_ml/models/lyjxwZABNrAVdFa9zrcZ/_batch_predict #### Example response +The response contains the task ID for the batch predict operation: + ```json { - "inference_results": [ - { - "output": [ - { - "name": "response", - "dataAsMap": { - "id": "batch_", - "object": "batch", - "endpoint": "/v1/embeddings", - "errors": null, - "input_file_id": "file-", - "completion_window": "24h", - "status": "validating", - "output_file_id": null, - "error_file_id": null, - "created_at": 1722037257, - "in_progress_at": null, - "expires_at": 1722123657, - "finalizing_at": null, - "completed_at": null, - "failed_at": null, - "expired_at": null, - "cancelling_at": null, - "cancelled_at": null, - "request_counts": { - "total": 0, - "completed": 0, - "failed": 0 - }, - "metadata": null - } - } - ], - "status_code": 200 - } - ] + "task_id": "KYZSv5EBqL2d0mFvs80C", + "status": "CREATED" } ``` -For the definition of each field in the result, see [OpenAI Batch API](https://platform.openai.com/docs/guides/batch). Once the batch inference is complete, you can download the output by calling the [OpenAI Files API](https://platform.openai.com/docs/api-reference/files) and providing the file name specified in the `id` field of the response. \ No newline at end of file +To check the status of the batch predict job, provide the task ID to the [Tasks API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/tasks-apis/get-task/). You can find the job details in the `remote_job` field in the task. Once the prediction is complete, the task `state` changes to `COMPLETED`. 
+ +#### Example request + +```json +GET /_plugins/_ml/tasks/KYZSv5EBqL2d0mFvs80C +``` +{% include copy-curl.html %} + +#### Example response + +The response contains the batch predict operation details in the `remote_job` field: + +```json +{ + "model_id": "JYZRv5EBqL2d0mFvKs1E", + "task_type": "BATCH_PREDICTION", + "function_name": "REMOTE", + "state": "RUNNING", + "input_type": "REMOTE", + "worker_node": [ + "Ee5OCIq0RAy05hqQsNI1rg" + ], + "create_time": 1725491751455, + "last_update_time": 1725491751455, + "is_async": false, + "remote_job": { + "cancelled_at": null, + "metadata": null, + "request_counts": { + "total": 3, + "completed": 3, + "failed": 0 + }, + "input_file_id": "file-XXXXXXXXXXXX", + "output_file_id": "file-XXXXXXXXXXXXX", + "error_file_id": null, + "created_at": 1725491753, + "in_progress_at": 1725491753, + "expired_at": null, + "finalizing_at": 1725491757, + "completed_at": null, + "endpoint": "/v1/embeddings", + "expires_at": 1725578153, + "cancelling_at": null, + "completion_window": "24h", + "id": "batch_XXXXXXXXXXXXXXX", + "failed_at": null, + "errors": null, + "object": "batch", + "status": "in_progress" + } +} +``` + +For the definition of each field in the result, see [OpenAI Batch API](https://platform.openai.com/docs/guides/batch). Once the batch inference is complete, you can download the output by calling the [OpenAI Files API](https://platform.openai.com/docs/api-reference/files) and providing the file name specified in the `id` field of the response. + +### Canceling a batch predict job + +You can also cancel the batch predict operation running on the remote platform using the task ID returned by the batch predict request. To add this capability, set the `action_type` to `cancel_batch_predict` in the connector configuration when creating the connector. + +#### Example request + +```json +POST /_plugins/_ml/tasks/KYZSv5EBqL2d0mFvs80C/_cancel_batch +``` +{% include copy-curl.html %} + +#### Example response + +```json +{ + "status": "OK" +} +``` From 22975b9e1a5d72684bf69e76f2896dd9875ce96a Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 17 Sep 2024 17:04:50 -0400 Subject: [PATCH 072/111] Add 2.17 version (#8308) Signed-off-by: Fanit Kolchina --- _config.yml | 6 +++--- _data/versions.json | 7 ++++--- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/_config.yml b/_config.yml index 8a43e2f61a..4ead6344c2 100644 --- a/_config.yml +++ b/_config.yml @@ -5,9 +5,9 @@ baseurl: "/docs/latest" # the subpath of your site, e.g. /blog url: "https://opensearch.org" # the base hostname & protocol for your site, e.g. 
http://example.com permalink: /:path/ -opensearch_version: '2.16.0' -opensearch_dashboards_version: '2.16.0' -opensearch_major_minor_version: '2.16' +opensearch_version: '2.17.0' +opensearch_dashboards_version: '2.17.0' +opensearch_major_minor_version: '2.17' lucene_version: '9_11_1' # Build settings diff --git a/_data/versions.json b/_data/versions.json index 4f7e55c21b..c14e91fa0c 100644 --- a/_data/versions.json +++ b/_data/versions.json @@ -1,10 +1,11 @@ { - "current": "2.16", + "current": "2.17", "all": [ - "2.16", + "2.17", "1.3" ], "archived": [ + "2.16", "2.15", "2.14", "2.13", @@ -25,7 +26,7 @@ "1.1", "1.0" ], - "latest": "2.16" + "latest": "2.17" } From a1a15c04ea1e453f02d1f4ce1c23e03a9f1bbae7 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 17 Sep 2024 17:08:49 -0400 Subject: [PATCH 073/111] Add release notes 2.17 (#8311) Signed-off-by: Fanit Kolchina --- ...arch-documentation-release-notes-2.17.0.md | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 release-notes/opensearch-documentation-release-notes-2.17.0.md diff --git a/release-notes/opensearch-documentation-release-notes-2.17.0.md b/release-notes/opensearch-documentation-release-notes-2.17.0.md new file mode 100644 index 0000000000..d9ed51737c --- /dev/null +++ b/release-notes/opensearch-documentation-release-notes-2.17.0.md @@ -0,0 +1,36 @@ +# OpenSearch Documentation Website 2.17.0 Release Notes + +The OpenSearch 2.17.0 documentation includes the following additions and updates. + +## New documentation for 2.17.0 + +- Get offline batch inference details using task API in m [#8305](https://github.com/opensearch-project/documentation-website/pull/8305) +- Documentation for Binary Quantization Support with KNN Vector Search [#8281](https://github.com/opensearch-project/documentation-website/pull/8281) +- add offline batch ingestion tech doc [#8251](https://github.com/opensearch-project/documentation-website/pull/8251) +- Add documentation changes for disk-based k-NN [#8246](https://github.com/opensearch-project/documentation-website/pull/8246) +- Derived field updates for 2.17 [#8244](https://github.com/opensearch-project/documentation-website/pull/8244) +- Add changes for multiple signing keys [#8243](https://github.com/opensearch-project/documentation-website/pull/8243) +- Add documentation changes for Snapshot Status API [#8235](https://github.com/opensearch-project/documentation-website/pull/8235) +- Update flow framework additional fields in previous_node_inputs [#8233](https://github.com/opensearch-project/documentation-website/pull/8233) +- Add documentation changes for shallow snapshot v2 [#8207](https://github.com/opensearch-project/documentation-website/pull/8207) +- Add documentation for context and ABC templates [#8197](https://github.com/opensearch-project/documentation-website/pull/8197) +- Create documentation for snapshots with hashed prefix path type [#8196](https://github.com/opensearch-project/documentation-website/pull/8196) +- Adding documentation for remote index use in AD [#8191](https://github.com/opensearch-project/documentation-website/pull/8191) +- Doc update for concurrent search [#8181](https://github.com/opensearch-project/documentation-website/pull/8181) +- Adding new cluster search setting docs [#8180](https://github.com/opensearch-project/documentation-website/pull/8180) +- Add new settings for remote publication [#8176](https://github.com/opensearch-project/documentation-website/pull/8176) +- Grouping Top N 
queries documentation [#8173](https://github.com/opensearch-project/documentation-website/pull/8173) +- Document reprovision param for Update Workflow API [#8172](https://github.com/opensearch-project/documentation-website/pull/8172) +- Add documentation for Faiss byte vector [#8170](https://github.com/opensearch-project/documentation-website/pull/8170) +- Terms query can accept encoded terms input as bitmap [#8133](https://github.com/opensearch-project/documentation-website/pull/8133) +- Update doc for adding new param in cat shards action for cancellation… [#8127](https://github.com/opensearch-project/documentation-website/pull/8127) +- Add docs on skip_validating_missing_parameters in ml-commons connector [#8118](https://github.com/opensearch-project/documentation-website/pull/8118) +- Add Split Response Processor to 2.17 Search Pipeline docs [#8081](https://github.com/opensearch-project/documentation-website/pull/8081) +- Added documentation for FGAC for Flow Framework [#8076](https://github.com/opensearch-project/documentation-website/pull/8076) +- Remove composite agg limitations for concurrent search [#7904](https://github.com/opensearch-project/documentation-website/pull/7904) +- Add doc for nodes stats search.request.took fields [#7887](https://github.com/opensearch-project/documentation-website/pull/7887) +- Add documentation for ignore_hosts config option for ip-based rate limiting [#7859](https://github.com/opensearch-project/documentation-website/pull/7859) + +## Documentation for 2.17.0 experimental features + +- Document new experimental ingestion streaming APIs [#8123](https://github.com/opensearch-project/documentation-website/pull/8123) From 842cd9e1fe6d9aa853fa13fc1ed7878d750b1fb5 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 17 Sep 2024 17:16:33 -0400 Subject: [PATCH 074/111] Add 2.17 to version history (#8309) Signed-off-by: Fanit Kolchina --- _about/version-history.md | 1 + 1 file changed, 1 insertion(+) diff --git a/_about/version-history.md b/_about/version-history.md index fd635aff5b..47253558e9 100644 --- a/_about/version-history.md +++ b/_about/version-history.md @@ -9,6 +9,7 @@ permalink: /version-history/ OpenSearch version | Release highlights | Release date :--- | :--- | :--- +[2.17.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.17.0.md) | Includes disk-optimized vector search, binary quantization, and byte vector encoding in k-NN. Adds asynchronous batch ingestion for ML tasks. Provides search and query performance enhancements and a new custom trace source in trace analytics. Includes application-based configuration templates. For a full list of release highlights, see the Release Notes. | 17 September 2024 [2.16.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.16.0.md) | Includes built-in byte vector quantization and binary vector support in k-NN. Adds new sort, split, and ML inference search processors for search pipelines. Provides application-based configuration templates and additional plugins to integrate multiple data sources in OpenSearch Dashboards. Includes an experimental Batch Predict ML Commons API. For a full list of release highlights, see the Release Notes. 
| 06 August 2024 [2.15.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.15.0.md) | Includes parallel ingestion processing, SIMD support for exact search, and the ability to disable doc values for the k-NN field. Adds wildcard and derived field types. Improves performance for single-cardinality aggregations, rolling upgrades to remote-backed clusters, and more metrics for top N queries. For a full list of release highlights, see the Release Notes. | 25 June 2024 [2.14.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.14.0.md) | Includes performance improvements to hybrid search and date histogram queries with multi-range traversal, ML model integration within the Ingest API, semantic cache for LangChain applications, low-level vector query interface for neural sparse queries, and improved k-NN search filtering. Provides an experimental tiered cache feature. For a full list of release highlights, see the Release Notes. | 14 May 2024 From 6a4a37cd15a5700f30c8735309268cefe3ac963d Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Wed, 18 Sep 2024 14:53:24 -0400 Subject: [PATCH 075/111] Update text about default role.yml file in default distribution (#8334) * Update text about default role.yml file in default distribution Signed-off-by: Craig Perkins * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Craig Perkins Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _security/configuration/yaml.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_security/configuration/yaml.md b/_security/configuration/yaml.md index 1686c8332e..2694e3a24f 100644 --- a/_security/configuration/yaml.md +++ b/_security/configuration/yaml.md @@ -265,7 +265,7 @@ kibana_server: ## roles.yml -This file contains any initial roles that you want to add to the Security plugin. Aside from some metadata, the default file is empty, because the Security plugin has a number of static roles that it adds automatically. +This file contains any initial roles that you want to add to the Security plugin. By default, this file contains predefined roles that grant usage to plugins within the default distribution of OpenSearch. The Security plugin will also add a number static roles automatically. 
```yml --- From ac40282a790f4caf08e28925b9cd77cf2221f98f Mon Sep 17 00:00:00 2001 From: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> Date: Thu, 19 Sep 2024 00:25:30 +0530 Subject: [PATCH 076/111] Improved shard allocation awareness attributes documentation (#8268) * Added more details regarding shard allocation awareness attributes Signed-off-by: Smit Patel * Update index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Addressed linter recommendations Signed-off-by: Smit Patel * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _tuning-your-cluster/index.md Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Smit Patel Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> Co-authored-by: Smit Patel Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _tuning-your-cluster/index.md | 20 ++++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/_tuning-your-cluster/index.md b/_tuning-your-cluster/index.md index 99db78565f..a13e8c4fb4 100644 --- a/_tuning-your-cluster/index.md +++ b/_tuning-your-cluster/index.md @@ -192,11 +192,25 @@ To better understand and monitor your cluster, use the [CAT API]({{site.url}}{{s ## (Advanced) Step 6: Configure shard allocation awareness or forced awareness +To further fine-tune your shard allocation, you can set custom node attributes for shard allocation awareness or forced awareness. + ### Shard allocation awareness -If your nodes are spread across several geographical zones, you can configure shard allocation awareness to allocate all replica shards to a zone that’s different from their primary shard. +You can set custom node attributes on OpenSearch nodes to be used for shard allocation awareness. For example, you can set the `zone` attribute on each node to represent the zone in which the node is located. You can also use the `zone` attribute to ensure that the primary shard and its replica shards are allocated in a balanced manner across available, distinct zones. 
For example, maximum shard copies per zone would equal `ceil (number_of_shard_copies/number_of_distinct_zones)`. + +Shard allocation awareness attempts to separate primary and replica shards across multiple zones because 2 shard copies cannot be placed on the same node. When only 1 zone is available, such as after a zone failure, OpenSearch allocates replica shards to the only remaining zone. For example, if your index has a total of 5 shard copies (1 primary and 4 replicas) and nodes in 3 distinct zones, then OpenSearch will perform the following to allocate all 5 shard copies: + +- Allocate fewer than 2 shards per zone, which will require at least 2 nodes in 2 zones. +- Allocate the last shard in the third zone, with at least 1 node needed in the third zone. + +Alternatively, if you have 3 nodes in the first zone and 1 node in each remaining zone, then OpenSearch will allocate: -With shard allocation awareness, if the nodes in one of your zones fail, you can be assured that your replica shards are spread across your other zones. It adds a layer of fault tolerance to ensure your data survives a zone failure beyond just individual node failures. +- 2 shard copies in the first zone. +- 1 shard copy in the remaining 2 zones. + +The final shard copy will remain unallocated due to the lack of nodes. + +With shard allocation awareness, if the nodes in one of your zones fail, you can be assured that your replica shards are spread across your other zones, adding a layer of fault tolerance to ensure that your data survives zone failures. To configure shard allocation awareness, add zone attributes to `opensearch-d1` and `opensearch-d2`, respectively: @@ -219,6 +233,8 @@ PUT _cluster/settings } ``` +You can also use multiple attributes for shard allocation awareness by providing the attributes as a comma-separated string, for example, `zone,rack`. + You can either use `persistent` or `transient` settings. We recommend the `persistent` setting because it persists through a cluster reboot. Transient settings don't persist through a cluster reboot. Shard allocation awareness attempts to separate primary and replica shards across multiple zones. However, if only one zone is available (such as after a zone failure), OpenSearch allocates replica shards to the only remaining zone. From edc45b7945b205ea644cacf0767ae5569a54d463 Mon Sep 17 00:00:00 2001 From: Smit Patel <39486815+patelsmit32123@users.noreply.github.com> Date: Thu, 19 Sep 2024 17:56:20 +0530 Subject: [PATCH 077/111] Improve shard allocation awareness documentation - part 2 (#8339) * Improve shard allocation awareness documentation - part 2 Signed-off-by: Smit Patel * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Smit Patel Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Smit Patel Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _tuning-your-cluster/index.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/_tuning-your-cluster/index.md b/_tuning-your-cluster/index.md index a13e8c4fb4..ca9a147ec2 100644 --- a/_tuning-your-cluster/index.md +++ b/_tuning-your-cluster/index.md @@ -196,11 +196,12 @@ To further fine-tune your shard allocation, you can set custom node attributes f ### Shard allocation awareness -You can set custom node attributes on OpenSearch nodes to be used for shard allocation awareness. 
For example, you can set the `zone` attribute on each node to represent the zone in which the node is located. You can also use the `zone` attribute to ensure that the primary shard and its replica shards are allocated in a balanced manner across available, distinct zones. For example, maximum shard copies per zone would equal `ceil (number_of_shard_copies/number_of_distinct_zones)`. +You can set custom node attributes on OpenSearch nodes to be used for shard allocation awareness. For example, you can set the `zone` attribute on each node to represent the zone in which the node is located. You can also use the `zone` attribute to ensure that the primary shard and its replica shards are allocated in a balanced manner across available, distinct zones. In this scenario, maximum shard copies per zone would equal `ceil (number_of_shard_copies/number_of_distinct_zones)`. -Shard allocation awareness attempts to separate primary and replica shards across multiple zones because 2 shard copies cannot be placed on the same node. When only 1 zone is available, such as after a zone failure, OpenSearch allocates replica shards to the only remaining zone. For example, if your index has a total of 5 shard copies (1 primary and 4 replicas) and nodes in 3 distinct zones, then OpenSearch will perform the following to allocate all 5 shard copies: +OpenSearch, by default, allocates shard copies of a single shard across different nodes. When only 1 zone is available, such as after a zone failure, OpenSearch allocates replica shards to the only remaining zone i.e. it considers only available zones (attribute values) for calculating maximum allowed shard copies per zone. +For example, if your index has a total of 5 shard copies (1 primary and 4 replicas) and nodes in 3 distinct zones, then OpenSearch will perform the following to allocate all 5 shard copies: -- Allocate fewer than 2 shards per zone, which will require at least 2 nodes in 2 zones. +- Allocate not more than 2 shards per zone, which will require at least 2 nodes in 2 zones. - Allocate the last shard in the third zone, with at least 1 node needed in the third zone. Alternatively, if you have 3 nodes in the first zone and 1 node in each remaining zone, then OpenSearch will allocate: @@ -208,7 +209,7 @@ Alternatively, if you have 3 nodes in the first zone and 1 node in each remainin - 2 shard copies in the first zone. - 1 shard copy in the remaining 2 zones. -The final shard copy will remain unallocated due to the lack of nodes. +The final shard copy will remain unallocated due to the lack of nodes. With shard allocation awareness, if the nodes in one of your zones fail, you can be assured that your replica shards are spread across your other zones, adding a layer of fault tolerance to ensure that your data survives zone failures. 
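As a concrete sketch of the multi-attribute case mentioned above, the following request enables awareness for both `zone` and `rack`. The attribute names are taken from the documentation; each node must also define matching `node.attr.zone` and `node.attr.rack` values in its `opensearch.yml` for the setting to take effect.

```json
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone,rack"
  }
}
```

As with the single-attribute example, `persistent` is used so that the setting survives a cluster restart.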
From 81a1355db417004766f4f395a3123257b40d0bdc Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Thu, 19 Sep 2024 07:35:05 -0500 Subject: [PATCH 078/111] Update index.md (#8340) Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _tuning-your-cluster/index.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/_tuning-your-cluster/index.md b/_tuning-your-cluster/index.md index ca9a147ec2..fa0973395f 100644 --- a/_tuning-your-cluster/index.md +++ b/_tuning-your-cluster/index.md @@ -198,10 +198,11 @@ To further fine-tune your shard allocation, you can set custom node attributes f You can set custom node attributes on OpenSearch nodes to be used for shard allocation awareness. For example, you can set the `zone` attribute on each node to represent the zone in which the node is located. You can also use the `zone` attribute to ensure that the primary shard and its replica shards are allocated in a balanced manner across available, distinct zones. In this scenario, maximum shard copies per zone would equal `ceil (number_of_shard_copies/number_of_distinct_zones)`. -OpenSearch, by default, allocates shard copies of a single shard across different nodes. When only 1 zone is available, such as after a zone failure, OpenSearch allocates replica shards to the only remaining zone i.e. it considers only available zones (attribute values) for calculating maximum allowed shard copies per zone. +OpenSearch, by default, allocates shard copies of a single shard across different nodes. When only 1 zone is available, such as after a zone failure, OpenSearch allocates replica shards to the only remaining zone---it considers only available zones (attribute values) when calculating the maximum number of allowed shard copies per zone. + For example, if your index has a total of 5 shard copies (1 primary and 4 replicas) and nodes in 3 distinct zones, then OpenSearch will perform the following to allocate all 5 shard copies: -- Allocate not more than 2 shards per zone, which will require at least 2 nodes in 2 zones. +- Allocate no more than 2 shards per zone, which will require at least 2 nodes in 2 zones. - Allocate the last shard in the third zone, with at least 1 node needed in the third zone. Alternatively, if you have 3 nodes in the first zone and 1 node in each remaining zone, then OpenSearch will allocate: From 9230b00ac05ee741fc5ce0c463aea203281ab4fb Mon Sep 17 00:00:00 2001 From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Date: Thu, 19 Sep 2024 13:37:37 +0100 Subject: [PATCH 079/111] Enhancing Security configuration steps (#8058) * wip building out the security configuration steps Signed-off-by: leanne.laceybyrne@eliatra.com * adding relevant links to docs. Signed-off-by: leanne.laceybyrne@eliatra.com * adding further info to security settings Signed-off-by: leanne.laceybyrne@eliatra.com * reviewdog issues fixed Signed-off-by: leanne.laceybyrne@eliatra.com * paths given for 1.0 securityadmin Signed-off-by: leanne.laceybyrne@eliatra.com * Reconfiguring layout Signed-off-by: leanne.laceybyrne@eliatra.com * updating security configuraton Signed-off-by: leanne.laceybyrne@eliatra.com * Update _security/configuration/index.md Co-authored-by: Craig Perkins Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Updates for examples given in config doc. 
Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com * Add doc review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Delete _security/configuration/test Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Made the securityadmin.sh backup tool instructions clearer Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanne.laceybyrne@eliatra.com * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _security/configuration/index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * updating the command for the securityadmin tool Signed-off-by: leanne.laceybyrne@eliatra.com * reviewdog updates Signed-off-by: leanne.laceybyrne@eliatra.com * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * removing headings as links Signed-off-by: leanne.laceybyrne@eliatra.com * Updating headings to be headings and adding extra links at the end of the text, as is the standard (not to have hyperlinked headings). Signed-off-by: leanne.laceybyrne@eliatra.com * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Craig Perkins Co-authored-by: Melissa Vagi Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _security/configuration/index.md | 110 +++++++++++++++++++++++++++---- 1 file changed, 97 insertions(+), 13 deletions(-) diff --git a/_security/configuration/index.md b/_security/configuration/index.md index e351e8865f..f68667d92d 100644 --- a/_security/configuration/index.md +++ b/_security/configuration/index.md @@ -3,7 +3,7 @@ layout: default title: Configuration nav_order: 2 has_children: true -has_toc: false +has_toc: true redirect_from: - /security-plugin/configuration/ - /security-plugin/configuration/index/ @@ -11,21 +11,105 @@ redirect_from: # Security configuration -The plugin includes demo certificates so that you can get up and running quickly. To use OpenSearch in a production environment, you must configure it manually: +The Security plugin includes demo certificates so that you can get up and running quickly. To use OpenSearch with the Security plugin in a production environment, you must make changes to the demo certificates and other configuration options manually. -1. 
[Replace the demo certificates]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/docker/#configuring-basic-security-settings). -1. [Reconfigure `opensearch.yml` to use your certificates]({{site.url}}{{site.baseurl}}/security/configuration/tls). -1. [Reconfigure `config.yml` to use your authentication backend]({{site.url}}{{site.baseurl}}/security/configuration/configuration/) (if you don't plan to use the internal user database). -1. [Modify the configuration YAML files]({{site.url}}{{site.baseurl}}/security/configuration/yaml). -1. If you plan to use the internal user database, [set a password policy in `opensearch.yml`]({{site.url}}{{site.baseurl}}/security/configuration/yaml/#opensearchyml). -1. [Apply changes using the `securityadmin` script]({{site.url}}{{site.baseurl}}/security/configuration/security-admin). -1. Start OpenSearch. -1. [Add users, roles, role mappings, and tenants]({{site.url}}{{site.baseurl}}/security/access-control/index/). +## Replace the demo certificates -If you don't want to use the plugin, see [Disable security]({{site.url}}{{site.baseurl}}/security/configuration/disable-enable-security/). +OpenSearch ships with demo certificates intended for quick setup and demonstration purposes. For a production environment, it's critical to replace these with your own trusted certificates, using the following steps, to ensure secure communication: -The Security plugin has several default users, roles, action groups, permissions, and settings for OpenSearch Dashboards that use kibana in their names. We will change these names in a future release. +1. **Generate your own certificates:** Use tools like OpenSSL or a certificate authority (CA) to generate your own certificates. For more information about generating certificates with OpenSSL, see [Generating self-signed certificates]({{site.url}}{{site.baseurl}}/security/configuration/generate-certificates/). +2. **Store the generated certificates and private key in the appropriate directory:** Generated certificates are typically stored in `/config/`. For more information, see [Add certificate files to opensearch.yml]({{site.url}}{{site.baseurl}}/security/configuration/generate-certificates/#add-certificate-files-to-opensearchyml). +3. **Set the following file permissions:** + - Private key (.key files): Set the file mode to `600`. This restricts access so that only the file owner (the OpenSearch user) can read and write to the file, ensuring that the private key remains secure and inaccessible to unauthorized users. + - Public certificates (.crt, .pem files): Set the file mode to `644`. This allows the file owner to read and write to the file, while other users can only read it. + +For additional guidance on file modes, see the following table. + + | Item | Sample | Numeric | Bitwise | + |-------------|---------------------|---------|--------------| + | Public key | `~/.ssh/id_rsa.pub` | `644` | `-rw-r--r--` | + | Private key | `~/.ssh/id_rsa` | `600` | `-rw-------` | + | SSH folder | `~/.ssh` | `700` | `drwx------` | + +For more information, see [Configuring basic security settings]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/docker/#configuring-basic-security-settings). + +## Reconfigure `opensearch.yml` to use your certificates + +The `opensearch.yml` file is the main configuration file for OpenSearch; you can find the file at `/config/opensearch.yml`. 
Update this file to point to your custom certificates as follows: + +In `opensearch.yml`, set the correct paths for your certificates and keys, as shown in the following example: + ``` + plugins.security.ssl.transport.pemcert_filepath: /path/to/your/cert.pem + plugins.security.ssl.transport.pemkey_filepath: /path/to/your/key.pem + plugins.security.ssl.transport.pemtrustedcas_filepath: /path/to/your/ca.pem + plugins.security.ssl.http.enabled: true + plugins.security.ssl.http.pemcert_filepath: /path/to/your/cert.pem + plugins.security.ssl.http.pemkey_filepath: /path/to/your/key.pem + plugins.security.ssl.http.pemtrustedcas_filepath: /path/to/your/ca.pem + ``` +For more information, see [Configuring TLS certificates]({{site.url}}{{site.baseurl}}/security/configuration/tls/). + +## Reconfigure `config.yml` to use your authentication backend + +The `config.yml` file allows you to configure the authentication and authorization mechanisms for OpenSearch. Update the authentication backend settings in `/config/opensearch-security/config.yml` according to your requirements. + +For example, to use HTTP basic authentication backed by the internal user database, add the following settings: + + ``` + authc: + basic_internal_auth: + http_enabled: true + transport_enabled: true + order: 1 + http_authenticator: + type: basic + challenge: true + authentication_backend: + type: internal + ``` +For more information, see [Configuring the Security backend]({{site.url}}{{site.baseurl}}/security/configuration/configuration/). + +## Modify the configuration YAML files + +Determine whether any additional YAML files need modification, for example, the `roles.yml`, `roles_mapping.yml`, or `internal_users.yml` files. Update the files with any additional configuration information. For more information, see [Modifying the YAML files]({{site.url}}{{site.baseurl}}/security/configuration/yaml/). + +## Set a password policy + +When using the internal user database, we recommend enforcing a password policy to ensure that strong passwords are used. For information about strong password policies, see [Password settings]({{site.url}}{{site.baseurl}}/security/configuration/yaml/#password-settings). + +## Apply changes using the `securityadmin` script + +The following steps do not apply to first-time users because the security index is automatically initialized from the YAML configuration files when OpenSearch starts. +{: .note} + +After initial setup, if you make changes to your security configuration or disable automatic initialization by setting `plugins.security.allow_default_init_securityindex` to `false` (which prevents security index initialization from `yaml` files), you need to manually apply changes using the `securityadmin` script: + +1. Find the `securityadmin` script. The script is typically stored in the OpenSearch plugins directory, `plugins/opensearch-security/tools/securityadmin.[sh|bat]`. + - Note: If you're using OpenSearch 1.x, the `securityadmin` script is located in the `plugins/opendistro_security/tools/` directory. + - For more information, see [Basic usage](https://opensearch.org/docs/latest/security/configuration/security-admin/#basic-usage). +2. Run the script by using the following command (an expanded example that includes certificate options follows these steps): + ``` + ./plugins/opensearch-security/tools/securityadmin.[sh|bat] + ``` +3. Check the OpenSearch logs and configuration to ensure that the changes have been successfully applied.
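For example, a complete `securityadmin` invocation usually points the tool at the security configuration directory and at your admin certificate files. The following sketch assumes that the admin certificate, its private key, and the root CA are stored in the `config` directory under the names shown; these file names are placeholders, so substitute the paths used in your environment:

```bash
# Sketch only: adjust the configuration directory and certificate paths to your environment.
# -cd      directory containing the security YAML files to upload
# -cacert  root CA certificate (placeholder file name)
# -cert    admin certificate (placeholder file name)
# -key     admin certificate private key (placeholder file name)
# -icl     ignore the cluster name; -nhnv disables hostname verification
./plugins/opensearch-security/tools/securityadmin.sh \
  -cd config/opensearch-security \
  -cacert config/root-ca.pem \
  -cert config/admin.pem \
  -key config/admin-key.pem \
  -icl -nhnv
```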
+ +For more information about using the `securityadmin` script, see [Applying changes to configuration files]({{site.url}}{{site.baseurl}}/security/configuration/security-admin/). + +## Add users, roles, role mappings, and tenants + +If you don't want to use the Security plugin, you can disable it by adding the following setting to the `opensearch.yml` file: + +``` +plugins.security.disabled: true +``` + +You can then enable the plugin by removing the `plugins.security.disabled` setting. + +For more information about disabling the Security plugin, see [Disable security]({{site.url}}{{site.baseurl}}/security/configuration/disable-enable-security/). + +The Security plugin has several default users, roles, action groups, permissions, and settings for OpenSearch Dashboards that contain "Kibana" in their names. We will change these names in a future version. {: .note } -For a full list of `opensearch.yml` Security plugin settings, Security plugin settings, see [Security settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/). +For a full list of `opensearch.yml` Security plugin settings, see [Security settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/). {: .note} + From 8275be3f457b33dea65b5e0eb4990766342fef35 Mon Sep 17 00:00:00 2001 From: John Mazanec Date: Fri, 20 Sep 2024 08:53:25 -0400 Subject: [PATCH 080/111] Add doc on disk-based vector search (#8332) * Add doc on disk-based vector search Signed-off-by: John Mazanec * Add training example Signed-off-by: John Mazanec * Address comments Signed-off-by: John Mazanec * Doc review Signed-off-by: Fanit Kolchina * Typo Signed-off-by: Fanit Kolchina * Another typo Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: John Mazanec Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../knn/disk-based-vector-search.md | 193 ++++++++++++++++++ 1 file changed, 193 insertions(+) create mode 100644 _search-plugins/knn/disk-based-vector-search.md diff --git a/_search-plugins/knn/disk-based-vector-search.md b/_search-plugins/knn/disk-based-vector-search.md new file mode 100644 index 0000000000..790dda11b1 --- /dev/null +++ b/_search-plugins/knn/disk-based-vector-search.md @@ -0,0 +1,193 @@ +--- +layout: default +title: Disk-based vector search +nav_order: 16 +parent: k-NN search +has_children: false +--- + +# Disk-based vector search +**Introduced 2.17** +{: .label .label-purple} + +For low-memory environments, OpenSearch provides _disk-based vector search_, which significantly reduces the operational costs for vector workloads. Disk-based vector search uses [binary quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/#binary-quantization), compressing vectors and thereby reducing the memory requirements. This memory optimization provides large memory savings at the cost of slightly increased search latency while still maintaining strong recall. + +To use disk-based vector search, set the [`mode`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#vector-workload-modes) parameter to `on_disk` for your vector field type. 
This parameter will configure your index to use secondary storage. + +## Creating an index for disk-based vector search + +To create an index for disk-based vector search, send the following request: + +```json +PUT my-vector-index +{ + "mappings": { + "properties": { + "my_vector_field": { + "type": "knn_vector", + "dimension": 8, + "space_type": "innerproduct", + "data_type": "float", + "mode": "on_disk" + } + } + } +} +``` +{% include copy-curl.html %} + +By default, the `on_disk` mode configures the index to use the `faiss` engine and `hnsw` method. The default [`compression_level`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#compression-levels) of `32x` reduces the amount of memory the vectors require by a factor of 32. To preserve the search recall, rescoring is enabled by default. A search on a disk-optimized index runs in two phases: The compressed index is searched first, and then the results are rescored using full-precision vectors loaded from disk. + +To reduce the compression level, provide the `compression_level` parameter when creating the index mapping: + +```json +PUT my-vector-index +{ + "mappings": { + "properties": { + "my_vector_field": { + "type": "knn_vector", + "dimension": 8, + "space_type": "innerproduct", + "data_type": "float", + "mode": "on_disk", + "compression_level": "16x" + } + } + } +} +``` +{% include copy-curl.html %} + +For more information about the `compression_level` parameter, see [Compression levels]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#compression-levels). Note that for `4x` compression, the `lucene` engine will be used. +{: .note} + +If you need more granular fine-tuning, you can override additional k-NN parameters in the method definition. For example, to improve recall, increase the `ef_construction` parameter value: + +```json +PUT my-vector-index +{ + "mappings": { + "properties": { + "my_vector_field": { + "type": "knn_vector", + "dimension": 8, + "space_type": "innerproduct", + "data_type": "float", + "mode": "on_disk", + "method": { + "params": { + "ef_construction": 512 + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +The `on_disk` mode only works with the `float` data type. +{: .note} + +## Ingestion + +You can perform document ingestion for a disk-optimized vector index in the same way as for a regular vector index. 
To index several documents in bulk, send the following request: + +```json +POST _bulk +{ "index": { "_index": "my-vector-index", "_id": "1" } } +{ "my_vector_field": [1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5], "price": 12.2 } +{ "index": { "_index": "my-vector-index", "_id": "2" } } +{ "my_vector_field": [2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5], "price": 7.1 } +{ "index": { "_index": "my-vector-index", "_id": "3" } } +{ "my_vector_field": [3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5], "price": 12.9 } +{ "index": { "_index": "my-vector-index", "_id": "4" } } +{ "my_vector_field": [4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5], "price": 1.2 } +{ "index": { "_index": "my-vector-index", "_id": "5" } } +{ "my_vector_field": [5.5, 5.5, 5.5, 5.5, 5.5, 5.5, 5.5, 5.5], "price": 3.7 } +{ "index": { "_index": "my-vector-index", "_id": "6" } } +{ "my_vector_field": [6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5], "price": 10.3 } +{ "index": { "_index": "my-vector-index", "_id": "7" } } +{ "my_vector_field": [7.5, 7.5, 7.5, 7.5, 7.5, 7.5, 7.5, 7.5], "price": 5.5 } +{ "index": { "_index": "my-vector-index", "_id": "8" } } +{ "my_vector_field": [8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5], "price": 4.4 } +{ "index": { "_index": "my-vector-index", "_id": "9" } } +{ "my_vector_field": [9.5, 9.5, 9.5, 9.5, 9.5, 9.5, 9.5, 9.5], "price": 8.9 } +``` +{% include copy-curl.html %} + +## Search + +Search is also performed in the same way as in other index configurations. The key difference is that, by default, the `oversample_factor` of the rescore parameter is set to `3.0` (unless you override the `compression_level`). For more information, see [Rescoring quantized results using full precision]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#rescoring-quantized-results-using-full-precision). To perform vector search on a disk-optimized index, provide the search vector: + +```json +GET my-vector-index/_search +{ + "query": { + "knn": { + "my_vector_field": { + "vector": [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5], + "k": 5 + } + } + } +} +``` +{% include copy-curl.html %} + +Similarly to other index configurations, you can override k-NN parameters in the search request: + +```json +GET my-vector-index/_search +{ + "query": { + "knn": { + "my_vector_field": { + "vector": [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5], + "k": 5, + "method_params": { + "ef_search": 512 + }, + "rescore": { + "oversample_factor": 10.0 + } + } + } + } +} +``` +{% include copy-curl.html %} + +[Radial search]({{site.url}}{{site.baseurl}}/search-plugins/knn/radial-search-knn/) does not support disk-based vector search. +{: .note} + +## Model-based indexes + +For [model-based indexes]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#building-a-k-nn-index-from-a-model), you can specify the `on_disk` parameter in the training request in the same way that you would specify it during index creation. By default, `on_disk` mode will use the [Faiss IVF method]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#supported-faiss-methods) and a compression level of `32x`. 
To run the training API, send the following request: + +```json +POST /_plugins/_knn/models/_train/test-model +{ + "training_index": "train-index-name", + "training_field": "train-field-name", + "dimension": 8, + "max_training_vector_count": 1200, + "search_size": 100, + "description": "My model", + "space_type": "innerproduct", + "mode": "on_disk" +} +``` +{% include copy-curl.html %} + +This command assumes that training data has been ingested into the `train-index-name` index. For more information, see [Building a k-NN index from a model]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#building-a-k-nn-index-from-a-model). +{: .note} + +You can override the `compression_level` for disk-optimized indexes in the same way as for regular k-NN indexes. + + +## Next steps + +- For more information about binary quantization, see [Binary quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/#binary-quantization). +- For more information about k-NN vector workload modes, see [Vector workload modes]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#vector-workload-modes). \ No newline at end of file From 7d3a3f58dab634f51d5425a49ddedea25132b197 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Fri, 20 Sep 2024 11:14:21 -0500 Subject: [PATCH 081/111] Add Benchmark glossary (#8190) * Add Benchmark glossary Signed-off-by: Archer * Add text Signed-off-by: Archer * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Archer Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _benchmark/glossary.md | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) create mode 100644 _benchmark/glossary.md diff --git a/_benchmark/glossary.md b/_benchmark/glossary.md new file mode 100644 index 0000000000..f86591d3d9 --- /dev/null +++ b/_benchmark/glossary.md @@ -0,0 +1,21 @@ +--- +layout: default +title: Glossary +nav_order: 10 +--- + +# OpenSearch Benchmark glossary + +The following terms are commonly used in OpenSearch Benchmark: + +- **Corpora**: A collection of documents. +- **Latency**: If `target-throughput` is disabled (has no value or a value of `0)`, then latency is equal to service time. If `target-throughput` is enabled (has a value of 1 or greater), then latency is equal to the service time plus the amount of time the request waits in the queue before being sent. +- **Metric keys**: The metrics stored by OpenSearch Benchmark, based on the configuration in the [metrics record]({{site.url}}{{site.baseurl}}/benchmark/metrics/metric-records/). 
+- **Operations**: In workloads, a list of API operations performed by a workload. +- **Pipeline**: A series of steps occurring both before and after running a workload that determines benchmark results. +- **Schedule**: A list of two or more operations performed in the order they appear when a workload is run. +- **Service time**: The amount of time taken for `opensearch-py`, the primary client for OpenSearch Benchmark, to send a request and receive a response from the OpenSearch cluster. It includes the amount of time taken for the server to process a request as well as for network latency, load balancer overhead, and deserialization/serialization. +- **Summary report**: A report generated at the end of a test based on the metric keys defined in the workload. +- **Test**: A single invocation of the OpenSearch Benchmark binary. +- **Throughput**: The number of operations completed in a given period of time. +- **Workload**: A collection of one or more benchmarking tests that use a specific document corpus to perform a benchmark against a cluster. The document corpus contains any indexes, data files, or operations invoked when the workload runs. \ No newline at end of file From 55aeb08bed297ad87259a9d1b36f4d82789c19bf Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Mon, 23 Sep 2024 14:03:01 -0500 Subject: [PATCH 082/111] Fix broken links (#8352) * Fix broken links Signed-off-by: Archer * Fix one more link Signed-off-by: Archer --------- Signed-off-by: Archer --- _clients/index.md | 4 ++-- _field-types/supported-field-types/knn-vector.md | 4 ++-- _install-and-configure/plugins.md | 2 +- _monitoring-your-cluster/pa/index.md | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/_clients/index.md b/_clients/index.md index fc8c23d912..a15f0539d2 100644 --- a/_clients/index.md +++ b/_clients/index.md @@ -53,8 +53,8 @@ To view the compatibility matrix for a specific client, see the `COMPATIBILITY.m Client | Recommended version :--- | :--- -[Elasticsearch Java low-level REST client](https://search.maven.org/artifact/org.elasticsearch.client/elasticsearch-rest-client/7.13.4/jar) | 7.13.4 -[Elasticsearch Java high-level REST client](https://search.maven.org/artifact/org.elasticsearch.client/elasticsearch-rest-high-level-client/7.13.4/jar) | 7.13.4 +[Elasticsearch Java low-level REST client](https://central.sonatype.com/artifact/org.elasticsearch.client/elasticsearch-rest-client/7.13.4) | 7.13.4 +[Elasticsearch Java high-level REST client](https://central.sonatype.com/artifact/org.elasticsearch.client/elasticsearch-rest-high-level-client/7.13.4) | 7.13.4 [Elasticsearch Python client](https://pypi.org/project/elasticsearch/7.13.4/) | 7.13.4 [Elasticsearch Node.js client](https://www.npmjs.com/package/@elastic/elasticsearch/v/7.13.0) | 7.13.0 [Elasticsearch Ruby client](https://rubygems.org/gems/elasticsearch/versions/7.13.0) | 7.13.0 diff --git a/_field-types/supported-field-types/knn-vector.md b/_field-types/supported-field-types/knn-vector.md index 4c00b94de8..da784aeefe 100644 --- a/_field-types/supported-field-types/knn-vector.md +++ b/_field-types/supported-field-types/knn-vector.md @@ -175,7 +175,7 @@ By default, k-NN vectors are `float` vectors, in which each dimension is 4 bytes Byte vectors are supported only for the `lucene` and `faiss` engines. They are not supported for the `nmslib` engine. 
{: .note} -In [k-NN benchmarking tests](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/perf-tool), the use of `byte` rather than `float` vectors resulted in a significant reduction in storage and memory usage as well as improved indexing throughput and reduced query latency. Additionally, precision on recall was not greatly affected (note that recall can depend on various factors, such as the [quantization technique](#quantization-techniques) and data distribution). +In [k-NN benchmarking tests](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/vectorsearch), the use of `byte` rather than `float` vectors resulted in a significant reduction in storage and memory usage as well as improved indexing throughput and reduced query latency. Additionally, precision on recall was not greatly affected (note that recall can depend on various factors, such as the [quantization technique](#quantization-techniques) and data distribution). When using `byte` vectors, expect some loss of precision in the recall compared to using `float` vectors. Byte vectors are useful in large-scale applications and use cases that prioritize a reduced memory footprint in exchange for a minimal loss of recall. {: .important} @@ -411,7 +411,7 @@ As an example, assume that you have 1 million vectors with a dimension of 256 an ### Quantization techniques -If your vectors are of the type `float`, you need to first convert them to the `byte` type before ingesting the documents. This conversion is accomplished by _quantizing the dataset_---reducing the precision of its vectors. There are many quantization techniques, such as scalar quantization or product quantization (PQ), which is used in the Faiss engine. The choice of quantization technique depends on the type of data you're using and can affect the accuracy of recall values. The following sections describe the scalar quantization algorithms that were used to quantize the [k-NN benchmarking test](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/perf-tool) data for the [L2](#scalar-quantization-for-the-l2-space-type) and [cosine similarity](#scalar-quantization-for-the-cosine-similarity-space-type) space types. The provided pseudocode is for illustration purposes only. +If your vectors are of the type `float`, you need to first convert them to the `byte` type before ingesting the documents. This conversion is accomplished by _quantizing the dataset_---reducing the precision of its vectors. There are many quantization techniques, such as scalar quantization or product quantization (PQ), which is used in the Faiss engine. The choice of quantization technique depends on the type of data you're using and can affect the accuracy of recall values. The following sections describe the scalar quantization algorithms that were used to quantize the [k-NN benchmarking test](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/vectorsearch) data for the [L2](#scalar-quantization-for-the-l2-space-type) and [cosine similarity](#scalar-quantization-for-the-cosine-similarity-space-type) space types. The provided pseudocode is for illustration purposes only. #### Scalar quantization for the L2 space type diff --git a/_install-and-configure/plugins.md b/_install-and-configure/plugins.md index 3a5d6a1834..e96b29e822 100644 --- a/_install-and-configure/plugins.md +++ b/_install-and-configure/plugins.md @@ -181,7 +181,7 @@ Continue with installation? 
[y/N]y ### Install a plugin using Maven coordinates -The `opensearch-plugin install` tool also allows you to specify Maven coordinates for available artifacts and versions hosted on [Maven Central](https://search.maven.org/search?q=org.opensearch.plugin). The tool parses the Maven coordinates you provide and constructs a URL. As a result, the host must be able to connect directly to the Maven Central site. The plugin installation fails if you pass coordinates to a proxy or local repository. +The `opensearch-plugin install` tool also allows you to specify Maven coordinates for available artifacts and versions hosted on [Maven Central](https://central.sonatype.com/namespace/org.opensearch.plugin). The tool parses the Maven coordinates you provide and constructs a URL. As a result, the host must be able to connect directly to the Maven Central site. The plugin installation fails if you pass coordinates to a proxy or local repository. #### Usage ```bash diff --git a/_monitoring-your-cluster/pa/index.md b/_monitoring-your-cluster/pa/index.md index bb4f9c6c30..156e985e8b 100644 --- a/_monitoring-your-cluster/pa/index.md +++ b/_monitoring-your-cluster/pa/index.md @@ -60,7 +60,7 @@ private-key-file-path = specify_path The Performance Analyzer plugin is included in the installations for [Docker]({{site.url}}{{site.baseurl}}/opensearch/install/docker/) and [tarball]({{site.url}}{{site.baseurl}}/opensearch/install/tar/), but you can also install the plugin manually. -To install the Performance Analyzer plugin manually, download the plugin from [Maven](https://search.maven.org/search?q=org.opensearch.plugin) and install it using the standard [plugin installation]({{site.url}}{{site.baseurl}}/opensearch/install/plugins/) process. Performance Analyzer runs on each node in a cluster. +To install the Performance Analyzer plugin manually, download the plugin from [Maven](https://central.sonatype.com/namespace/org.opensearch.plugin) and install it using the standard [plugin installation]({{site.url}}{{site.baseurl}}/opensearch/install/plugins/) process. Performance Analyzer runs on each node in a cluster. 
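For example, on a tarball installation, the manual installation is a single CLI call that passes Maven coordinates to `opensearch-plugin install`. The coordinates shown below are placeholders for illustration; check Maven Central for the Performance Analyzer artifact name and the version that matches your OpenSearch distribution before running the command:

```bash
# Placeholder coordinates: replace the group:artifact:version values with the
# Performance Analyzer artifact and version that match your OpenSearch distribution.
bin/opensearch-plugin install org.opensearch.plugin:opensearch-performance-analyzer:2.11.0.0
```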
To start the Performance Analyzer root cause analysis (RCA) agent on a tarball installation, run the following command: From d9829d72c3315eb8a56f2a371ce39fe2cbf4cc3a Mon Sep 17 00:00:00 2001 From: Ryan Bogan Date: Tue, 24 Sep 2024 07:10:16 -0700 Subject: [PATCH 083/111] Fix k-NN search json to have correct query (#8358) Signed-off-by: Ryan Bogan --- _search-plugins/knn/disk-based-vector-search.md | 2 +- _search-plugins/knn/knn-vector-quantization.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/_search-plugins/knn/disk-based-vector-search.md b/_search-plugins/knn/disk-based-vector-search.md index 790dda11b1..82da30a0ac 100644 --- a/_search-plugins/knn/disk-based-vector-search.md +++ b/_search-plugins/knn/disk-based-vector-search.md @@ -146,7 +146,7 @@ GET my-vector-index/_search "my_vector_field": { "vector": [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5], "k": 5, - "method_params": { + "method_parameters": { "ef_search": 512 }, "rescore": { diff --git a/_search-plugins/knn/knn-vector-quantization.md b/_search-plugins/knn/knn-vector-quantization.md index 508f9e6535..a911dc91c9 100644 --- a/_search-plugins/knn/knn-vector-quantization.md +++ b/_search-plugins/knn/knn-vector-quantization.md @@ -436,7 +436,7 @@ GET my-vector-index/_search "my_vector_field": { "vector": [1.5, 5.5, 1.5, 5.5, 1.5, 5.5, 1.5, 5.5], "k": 10, - "method_params": { + "method_parameters": { "ef_search": 10 }, "rescore": { From 19cdfa3b6ea98c8276a10629e875117a6556b7bf Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 24 Sep 2024 11:24:29 -0400 Subject: [PATCH 084/111] Add has_child query (#8354) * Add has_child query Signed-off-by: Fanit Kolchina * Rename parameter table header Signed-off-by: Fanit Kolchina * Update _query-dsl/joining/has-child.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- _field-types/supported-field-types/join.md | 2 +- _query-dsl/geo-and-xy/geo-bounding-box.md | 6 +- _query-dsl/geo-and-xy/geodistance.md | 6 +- _query-dsl/geo-and-xy/geopolygon.md | 6 +- _query-dsl/geo-and-xy/geoshape.md | 6 +- _query-dsl/joining/has-child.md | 259 +++++++++++++++++++++ _query-dsl/joining/index.md | 5 +- 7 files changed, 275 insertions(+), 15 deletions(-) create mode 100644 _query-dsl/joining/has-child.md diff --git a/_field-types/supported-field-types/join.md b/_field-types/supported-field-types/join.md index c707c66774..1c5b0d1322 100644 --- a/_field-types/supported-field-types/join.md +++ b/_field-types/supported-field-types/join.md @@ -61,7 +61,7 @@ PUT testindex1/_doc/1 ``` {% include copy-curl.html %} -When indexing child documents, you have to specify the `routing` query parameter because parent and child documents in the same relation have to be indexed on the same shard. Each child document refers to its parent's ID in the `parent` field. +When indexing child documents, you need to specify the `routing` query parameter because parent and child documents in the same parent/child hierarchy must be indexed on the same shard. 
For more information, see [Routing]({{site.url}}{{site.baseurl}}/field-types/metadata-fields/routing/). Each child document refers to its parent's ID in the `parent` field. Index two child documents, one for each parent: diff --git a/_query-dsl/geo-and-xy/geo-bounding-box.md b/_query-dsl/geo-and-xy/geo-bounding-box.md index 1112a4278e..66fcc224d6 100644 --- a/_query-dsl/geo-and-xy/geo-bounding-box.md +++ b/_query-dsl/geo-and-xy/geo-bounding-box.md @@ -173,11 +173,11 @@ GET testindex1/_search ``` {% include copy-curl.html %} -## Request fields +## Parameters -Geo-bounding box queries accept the following fields. +Geo-bounding box queries accept the following parameters. -Field | Data type | Description +Parameter | Data type | Description :--- | :--- | :--- `_name` | String | The name of the filter. Optional. `validation_method` | String | The validation method. Valid values are `IGNORE_MALFORMED` (accept geopoints with invalid coordinates), `COERCE` (try to coerce coordinates to valid values), and `STRICT` (return an error when coordinates are invalid). Default is `STRICT`. diff --git a/_query-dsl/geo-and-xy/geodistance.md b/_query-dsl/geo-and-xy/geodistance.md index b272cad81e..3eef58bc69 100644 --- a/_query-dsl/geo-and-xy/geodistance.md +++ b/_query-dsl/geo-and-xy/geodistance.md @@ -103,11 +103,11 @@ The response contains the matching document: } ``` -## Request fields +## Parameters -Geodistance queries accept the following fields. +Geodistance queries accept the following parameters. -Field | Data type | Description +Parameter | Data type | Description :--- | :--- | :--- `_name` | String | The name of the filter. Optional. `distance` | String | The distance within which to match the points. This distance is the radius of a circle centered at the specified point. For supported distance units, see [Distance units]({{site.url}}{{site.baseurl}}/api-reference/common-parameters/#distance-units). Required. diff --git a/_query-dsl/geo-and-xy/geopolygon.md b/_query-dsl/geo-and-xy/geopolygon.md index 980a0c5a63..810e48f2b7 100644 --- a/_query-dsl/geo-and-xy/geopolygon.md +++ b/_query-dsl/geo-and-xy/geopolygon.md @@ -161,11 +161,11 @@ However, if you specify the vertices in the following order: The response returns no results. -## Request fields +## Parameters -Geopolygon queries accept the following fields. +Geopolygon queries accept the following parameters. -Field | Data type | Description +Parameter | Data type | Description :--- | :--- | :--- `_name` | String | The name of the filter. Optional. `validation_method` | String | The validation method. Valid values are `IGNORE_MALFORMED` (accept geopoints with invalid coordinates), `COERCE` (try to coerce coordinates to valid values), and `STRICT` (return an error when coordinates are invalid). Optional. Default is `STRICT`. diff --git a/_query-dsl/geo-and-xy/geoshape.md b/_query-dsl/geo-and-xy/geoshape.md index 8acc691c3a..5b144b06d6 100644 --- a/_query-dsl/geo-and-xy/geoshape.md +++ b/_query-dsl/geo-and-xy/geoshape.md @@ -721,10 +721,10 @@ The response returns document 1: Note that when you indexed the geopoints, you specified their coordinates in `"latitude, longitude"` format. When you search for matching documents, the coordinate array is in `[longitude, latitude]` format. Thus, document 1 is returned in the results but document 2 is not. -## Request fields +## Parameters -Geoshape queries accept the following fields. +Geoshape queries accept the following parameters. 
-Field | Data type | Description +Parameter | Data type | Description :--- | :--- | :--- `ignore_unmapped` | Boolean | Specifies whether to ignore an unmapped field. If set to `true`, then the query does not return any documents that contain an unmapped field. If set to `false`, then an exception is thrown when the field is unmapped. Optional. Default is `false`. \ No newline at end of file diff --git a/_query-dsl/joining/has-child.md b/_query-dsl/joining/has-child.md new file mode 100644 index 0000000000..c1cc7a5423 --- /dev/null +++ b/_query-dsl/joining/has-child.md @@ -0,0 +1,259 @@ +--- +layout: default +title: Has child +parent: Joining queries +nav_order: 10 +--- + +# Has child query + +The `has_child` query returns parent documents whose child documents match a specific query. You can establish parent-child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. + +The `has_child` query is slower than other queries because of the join operation it performs. Performance decreases as the number of matching child documents pointing to different parent documents increases. Each `has_child` query in your search may significantly impact query performance. If you prioritize speed, avoid using this query or limit its usage as much as possible. +{: .warning} + +## Example + +Before you can run a `has_child` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent-child relationships. The index mapping request uses the following format: + +```json +PUT /example_index +{ + "mappings": { + "properties": { + "relationship_field": { + "type": "join", + "relations": { + "parent_doc": "child_doc" + } + } + } + } +} +``` +{% include copy-curl.html %} + +In this example, you'll configure an index that contains documents representing products and their brands. + +First, create the index and establish the parent-child relationship between `brand` and `product`: + +```json +PUT testindex1 +{ + "mappings": { + "properties": { + "product_to_brand": { + "type": "join", + "relations": { + "brand": "product" + } + } + } + } +} +``` +{% include copy-curl.html %} + +Index two parent (brand) documents: + +```json +PUT testindex1/_doc/1 +{ + "name": "Luxury brand", + "product_to_brand" : "brand" +} +``` +{% include copy-curl.html %} + +```json +PUT testindex1/_doc/2 +{ + "name": "Economy brand", + "product_to_brand" : "brand" +} +``` +{% include copy-curl.html %} + +Index three child (product) documents: + +```json +PUT testindex1/_doc/3?routing=1 +{ + "name": "Mechanical watch", + "sales_count": 150, + "product_to_brand": { + "name": "product", + "parent": "1" + } +} +``` +{% include copy-curl.html %} + +```json +PUT testindex1/_doc/4?routing=2 +{ + "name": "Electronic watch", + "sales_count": 300, + "product_to_brand": { + "name": "product", + "parent": "2" + } +} +``` +{% include copy-curl.html %} + +```json +PUT testindex1/_doc/5?routing=2 +{ + "name": "Digital watch", + "sales_count": 100, + "product_to_brand": { + "name": "product", + "parent": "2" + } +} +``` +{% include copy-curl.html %} + +To search for the parent of a child, use a `has_child` query. 
The following query returns parent documents (brands) that make watches: + +```json +GET testindex1/_search +{ + "query" : { + "has_child": { + "type":"product", + "query": { + "match" : { + "name": "watch" + } + } + } + } +} +``` +{% include copy-curl.html %} + +The response returns both brands: + +```json +{ + "took": 15, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "testindex1", + "_id": "1", + "_score": 1, + "_source": { + "name": "Luxury brand", + "product_to_brand": "brand" + } + }, + { + "_index": "testindex1", + "_id": "2", + "_score": 1, + "_source": { + "name": "Economy brand", + "product_to_brand": "brand" + } + } + ] + } +} +``` + +## Parameters + +The following table lists all top-level parameters supported by `has_child` queries. + +| Parameter | Required/Optional | Description | +|:---|:---|:---| +| `type` | Required | Specifies the name of the child relationship as defined in the `join` field mapping. | +| `query` | Required | The query to run on child documents. If a child document matches the query, the parent document is returned. | +| `ignore_unmapped` | Optional | Indicates whether to ignore unmapped `type` fields and not return documents instead of throwing an error. You can provide this parameter when querying multiple indexes, some of which may not contain the `type` field. Default is `false`. | +| `max_children` | Optional | The maximum number of matching child documents for a parent document. If exceeded, the parent document is excluded from the search results. | +| `min_children` | Optional | The minimum number of matching child documents required for a parent document to be included in the results. If not met, the parent is excluded. Default is `1`.| +| `score_mode` | Optional | Defines how scores of matching child documents influence the parent document's score. Valid values are:
- `none`: Ignores the relevance scores of child documents and assigns a score of `0` to the parent document.
- `avg`: Uses the average relevance score of all matching child documents.
- `max`: Assigns the highest relevance score from the matching child documents to the parent.
- `min`: Assigns the lowest relevance score from the matching child documents to the parent.
- `sum`: Sums the relevance scores of all matching child documents.
Default is `none`. | + + +## Sorting limitations + +The `has_child` query does not support [sorting results]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/sort/) using standard sorting options. If you need to sort parent documents by fields in their child documents, you can use a [`function_score` query]({{site.url}}{{site.baseurl}}/query-dsl/compound/function-score/) and sort by the parent document's score. + +In the preceding example, you can sort parent documents (brands) based on the `sales_count` of their child products. This query multiplies the score by the `sales_count` field of the child documents and assigns the highest relevance score from the matching child documents to the parent: + +```json +GET testindex1/_search +{ + "query": { + "has_child": { + "type": "product", + "query": { + "function_score": { + "script_score": { + "script": "_score * doc['sales_count'].value" + } + } + }, + "score_mode": "max" + } + } +} +``` +{% include copy-curl.html %} + +The response contains the brands sorted by the highest child `sales_count`: + +```json +{ + "took": 6, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 300, + "hits": [ + { + "_index": "testindex1", + "_id": "2", + "_score": 300, + "_source": { + "name": "Economy brand", + "product_to_brand": "brand" + } + }, + { + "_index": "testindex1", + "_id": "1", + "_score": 150, + "_source": { + "name": "Luxury brand", + "product_to_brand": "brand" + } + } + ] + } +} +``` \ No newline at end of file diff --git a/_query-dsl/joining/index.md b/_query-dsl/joining/index.md index 20f48c0b16..4ed46b3e17 100644 --- a/_query-dsl/joining/index.md +++ b/_query-dsl/joining/index.md @@ -3,6 +3,7 @@ layout: default title: Joining queries has_children: true nav_order: 55 +has_toc: false --- # Joining queries @@ -10,9 +11,9 @@ nav_order: 55 OpenSearch is a distributed system in which data is spread across multiple nodes. Thus, running a SQL-like JOIN operation in OpenSearch is resource intensive. As an alternative, OpenSearch provides the following queries that perform join operations and are optimized for scaling across multiple nodes: - `nested` queries: Act as wrappers for other queries to search [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields. The nested field objects are searched as though they were indexed as separate documents. -- `has_child` queries: Search for parent documents whose child documents match the query. +- [`has_child`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-child/) queries: Search for parent documents whose child documents match the query. - `has_parent` queries: Search for child documents whose parent documents match the query. -- `parent_id` queries: A [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) field type establishes a parent/child relationship between documents in the same index. `parent_id` queries search for child documents that are joined to a specific parent document. +- `parent_id` queries: A [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type establishes a parent/child relationship between documents in the same index. `parent_id` queries search for child documents that are joined to a specific parent document. 
If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then joining queries are not executed. {: .important} \ No newline at end of file From 1cbacf936e724a06b40675dfd6243ab80a51bc9e Mon Sep 17 00:00:00 2001 From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Date: Tue, 24 Sep 2024 16:33:31 +0100 Subject: [PATCH 085/111] Collapsing search results new page added to documentation (#7919) * adding documentation for collapsing search results Signed-off-by: leanne.laceybyrne@eliatra.com * Clarifying collapsing of search results Signed-off-by: leanne.laceybyrne@eliatra.com * updating collapsing example as per request Signed-off-by: leanne.laceybyrne@eliatra.com * updates as per reviewdog Signed-off-by: leanne.laceybyrne@eliatra.com * updates as per review dog Signed-off-by: leanne.laceybyrne@eliatra.com * remove unneeded space Signed-off-by: leanne.laceybyrne@eliatra.com * Update _search-plugins/collapse-search.md Co-authored-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _search-plugins/collapse-search.md Co-authored-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _search-plugins/collapse-search.md Co-authored-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _search-plugins/collapse-search.md Co-authored-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Nathan Bower Signed-off-by: Melissa Vagi * Update _search-plugins/collapse-search.md Co-authored-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Review suggestions addressed Signed-off-by: leanne.laceybyrne@eliatra.com * update to language Signed-off-by: leanne.laceybyrne@eliatra.com * update to language from review Signed-off-by: leanne.laceybyrne@eliatra.com --------- Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Signed-off-by: Melissa Vagi Co-authored-by: Melissa Vagi Co-authored-by: Nathan Bower --- 
_search-plugins/collapse-search.md | 231 +++++++++++++++++++++++++++++ 1 file changed, 231 insertions(+) create mode 100644 _search-plugins/collapse-search.md diff --git a/_search-plugins/collapse-search.md b/_search-plugins/collapse-search.md new file mode 100644 index 0000000000..ec7e57515a --- /dev/null +++ b/_search-plugins/collapse-search.md @@ -0,0 +1,231 @@ +--- +layout: default +title: Collapse search results +nav_order: 3 +--- + +# Collapse search results + +The `collapse` parameter groups search results by a particular field value. This returns only the top document within each group, which helps reduce redundancy by eliminating duplicates. + +The `collapse` parameter requires the field being collapsed to be of either a `keyword` or a `numeric` type. + +--- + +## Collapsing search results + +To populate an index with data, define the index mappings and an `item` field indexed as a `keyword`. The following example request shows you how to define index mappings, populate an index, and then search it. + +#### Define index mappings + +```json +PUT /bakery-items +{ + "mappings": { + "properties": { + "item": { + "type": "keyword" + }, + "category": { + "type": "keyword" + }, + "price": { + "type": "float" + }, + "baked_date": { + "type": "date" + } + } + } +} +``` + +#### Populate an index + +```json +POST /bakery-items/_bulk +{ "index": {} } +{ "item": "Chocolate Cake", "category": "cakes", "price": 15, "baked_date": "2023-07-01T00:00:00Z" } +{ "index": {} } +{ "item": "Chocolate Cake", "category": "cakes", "price": 18, "baked_date": "2023-07-04T00:00:00Z" } +{ "index": {} } +{ "item": "Vanilla Cake", "category": "cakes", "price": 12, "baked_date": "2023-07-02T00:00:00Z" } +``` + +#### Search the index, returning all results + +```json +GET /bakery-items/_search +{ + "query": { + "match": { + "category": "cakes" + } + }, + "sort": ["price"] +} +``` + +This query returns the uncollapsed search results, showing all documents, including both entries for "Chocolate Cake". + +#### Search the index and collapse the results + +To group search results by the `item` field and sort them by `price`, you can use the following query: + +**Collapsed `item` field search results** + +```json +GET /bakery-items/_search +{ + "query": { + "match": { + "category": "cakes" + } + }, + "collapse": { + "field": "item" + }, + "sort": ["price"] +} +``` + +**Response** + +```json +{ + "took": 3, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 4, + "relation": "eq" + }, + "max_score": null, + "hits": [ + { + "_index": "bakery-items", + "_id": "mISga5EB2HLDXHkv9kAr", + "_score": null, + "_source": { + "item": "Vanilla Cake", + "category": "cakes", + "price": 12, + "baked_date": "2023-07-02T00:00:00Z", + "baker": "Baker A" + }, + "fields": { + "item": [ + "Vanilla Cake" + ] + }, + "sort": [ + 12 + ] + }, + { + "_index": "bakery-items", + "_id": "loSga5EB2HLDXHkv9kAr", + "_score": null, + "_source": { + "item": "Chocolate Cake", + "category": "cakes", + "price": 15, + "baked_date": "2023-07-01T00:00:00Z", + "baker": "Baker A" + }, + "fields": { + "item": [ + "Chocolate Cake" + ] + }, + "sort": [ + 15 + ] + } + ] + } +} +``` + +The collapsed search results will show only one "Chocolate Cake" entry, demonstrating how the `collapse` parameter reduces redundancy. + +The `collapse` parameter affects only the top search results and does not change any aggregation results. 
The total number of hits shown in the response reflects all matching documents before the parameter is applied, including duplicates. However, the response doesn't indicate the exact number of unique groups formed by the operation. + +--- + +## Expanding collapsed results + +You can expand each collapsed top hit with the `inner_hits` property. + +The following example request applies `inner_hits` to retrieve the lowest-priced and most recent item, for each type of cake: + +```json +GET /bakery-items/_search +{ + "query": { + "match": { + "category": "cakes" + } + }, + "collapse": { + "field": "item", + "inner_hits": [ + { + "name": "cheapest_items", + "size": 1, + "sort": ["price"] + }, + { + "name": "newest_items", + "size": 1, + "sort": [{ "baked_date": "desc" }] + } + ] + }, + "sort": ["price"] +} + +``` + +### Multiple inner hits for each collapsed hit + +To obtain several groups of inner hits for each collapsed result, you can set different criteria for each group. For example, lets request the three most recent items for every bakery item: + +```json +GET /bakery-items/_search +{ + "query": { + "match": { + "category": "cakes" + } + }, + "collapse": { + "field": "item", + "inner_hits": [ + { + "name": "cheapest_items", + "size": 1, + "sort": ["price"] + }, + { + "name": "newest_items", + "size": 3, + "sort": [{ "baked_date": "desc" }] + } + ] + }, + "sort": ["price"] +} + + +``` +This query searches for documents in the `cakes` category and groups the search results by the `item_name` field. For each `item_name`, it retrieves the top three lowest-priced items and the top three most recent items, sorted by `baked_date` in descending order. + +You can expand the groups by sending an additional query for each inner hit request corresponding to each collapsed hit in the response. This can significantly slow down the process if there are too many groups or inner hit requests. The `max_concurrent_group_searches` request parameter can be used to control the maximum number of concurrent searches allowed in this phase. The default is based on the number of data nodes and the default search thread pool size. + From 53b650f481834e30eafa7cc4b80a7a523dbc562a Mon Sep 17 00:00:00 2001 From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Date: Tue, 24 Sep 2024 16:48:36 +0100 Subject: [PATCH 086/111] [DOC] Normalizers (#8192) * updating index page with normalisation Signed-off-by: leanne.laceybyrne@eliatra.com * Update _analyzers/index.md Signed-off-by: Melissa Vagi * Update _analyzers/index.md Signed-off-by: Melissa Vagi * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> --------- Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: Melissa Vagi Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Co-authored-by: Melissa Vagi Co-authored-by: Nathan Bower --- _analyzers/index.md | 24 +++++++++++++++++++++++- 1 file changed, 23 insertions(+), 1 deletion(-) diff --git a/_analyzers/index.md b/_analyzers/index.md index 95f97ec8ce..9b999e5c3d 100644 --- a/_analyzers/index.md +++ b/_analyzers/index.md @@ -170,6 +170,28 @@ The response provides information about the analyzers for each field: } ``` +## Normalizers +Tokenization divides text into individual terms, but it does not address variations in token forms. Normalization resolves these issues by converting tokens into a standard format. 
This ensures that similar terms are matched appropriately, even if they are not identical. + +### Normalization techniques + +The following normalization techniques can help address variations in token forms: +1. **Case normalization**: Converts all tokens to lowercase to ensure case-insensitive matching. For example, "Hello" is normalized to "hello". + +2. **Stemming**: Reduces words to their root form. For instance, "cars" is stemmed to "car", and "running" is normalized to "run". + +3. **Synonym handling:** Treats synonyms as equivalent. For example, "jogging" and "running" can be indexed under a common term, such as "run". + +### Normalization + +A search for `Hello` will match documents containing `hello` because of case normalization. + +A search for `cars` will also match documents containing `car` because of stemming. + +A query for `running` can retrieve documents containing `jogging` using synonym handling. + +Normalization ensures that searches are not limited to exact term matches, allowing for more relevant results. For instance, a search for `Cars running` can be normalized to match `car run`. + ## Next steps -- Learn more about specifying [index analyzers]({{site.url}}{{site.baseurl}}/analyzers/index-analyzers/) and [search analyzers]({{site.url}}{{site.baseurl}}/analyzers/search-analyzers/). \ No newline at end of file +- Learn more about specifying [index analyzers]({{site.url}}{{site.baseurl}}/analyzers/index-analyzers/) and [search analyzers]({{site.url}}{{site.baseurl}}/analyzers/search-analyzers/). From 85a9fce3c078788367eb2db0f72ec6cbfa7a6854 Mon Sep 17 00:00:00 2001 From: Drew Miranda <107503402+drewmiranda-gl@users.noreply.github.com> Date: Tue, 24 Sep 2024 16:21:01 -0400 Subject: [PATCH 087/111] Disambiguate statement "value defaults to a fixed percentage" (#8363) This page documents that "In the case of a dedicated search node where the node exclusively has the search role, this value defaults to a fixed percentage of available storage." However, the document does not provide specifics about what this fixed percentage is. From what I can surmise reading the code, and from doing real world testing, this value is set to 80% of the available space on the volume where the file cache lives. Signed-off-by: Drew Miranda <107503402+drewmiranda-gl@users.noreply.github.com> --- .../availability-and-recovery/snapshots/searchable_snapshot.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md b/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md index b9e35b2697..d13955f3f0 100644 --- a/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md +++ b/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md @@ -18,7 +18,7 @@ The searchable snapshot feature incorporates techniques like caching frequently To configure the searchable snapshots feature, create a node in your `opensearch.yml file` and define the node role as `search`. Optionally, you can also configure the `cache.size` property for the node. -A `search` node reserves storage for the cache to perform searchable snapshot queries. In the case of a dedicated search node where the node exclusively has the `search` role, this value defaults to a fixed percentage of available storage. In other cases, the value needs to be configured by the user using the `node.search.cache.size` setting. 
+A `search` node reserves storage for the cache to perform searchable snapshot queries. In the case of a dedicated search node where the node exclusively has the `search` role, this value defaults to a fixed percentage (80%) of available storage. In other cases, the value needs to be configured by the user using the `node.search.cache.size` setting. Parameter | Type | Description :--- | :--- | :--- From 7780da9103a3cdf979639b520b7c2220f90ada9a Mon Sep 17 00:00:00 2001 From: Finn Date: Tue, 24 Sep 2024 14:15:07 -0700 Subject: [PATCH 088/111] opensearch-benchmark execute_test -> execute-test (#8366) Signed-off-by: Finn Carroll --- _benchmark/quickstart.md | 4 ++-- _benchmark/user-guide/creating-custom-workloads.md | 4 ++-- _benchmark/user-guide/distributed-load.md | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/_benchmark/quickstart.md b/_benchmark/quickstart.md index a6bcd59819..9ab6d25c77 100644 --- a/_benchmark/quickstart.md +++ b/_benchmark/quickstart.md @@ -116,7 +116,7 @@ You can now run your first benchmark. The following benchmark uses the [percolat Benchmarks are run using the [`execute-test`]({{site.url}}{{site.baseurl}}/benchmark/commands/execute-test/) command with the following command flags: -For additional `execute_test` command flags, see the [execute-test]({{site.url}}{{site.baseurl}}/benchmark/commands/execute-test/) reference. Some commonly used options are `--workload-params`, `--exclude-tasks`, and `--include-tasks`. +For additional `execute-test` command flags, see the [execute-test]({{site.url}}{{site.baseurl}}/benchmark/commands/execute-test/) reference. Some commonly used options are `--workload-params`, `--exclude-tasks`, and `--include-tasks`. {: .tip} * `--pipeline=benchmark-only` : Informs OSB that users wants to provide their own OpenSearch cluster. @@ -136,7 +136,7 @@ opensearch-benchmark execute-test --pipeline=benchmark-only --workload=percolato ``` {% include copy.html %} -When the `execute_test` command runs, all tasks and operations in the `percolator` workload run sequentially. +When the `execute-test` command runs, all tasks and operations in the `percolator` workload run sequentially. ### Validating the test diff --git a/_benchmark/user-guide/creating-custom-workloads.md b/_benchmark/user-guide/creating-custom-workloads.md index ee0dca1ce9..6effa9a265 100644 --- a/_benchmark/user-guide/creating-custom-workloads.md +++ b/_benchmark/user-guide/creating-custom-workloads.md @@ -263,7 +263,7 @@ opensearch-benchmark list workloads --workload-path= Use the `opensearch-benchmark execute-test` command to invoke your new workload and run a benchmark test against your OpenSearch cluster, as shown in the following example. Replace `--workload-path` with the path to your custom workload, `--target-host` with the `host:port` pairs for your cluster, and `--client-options` with any authorization options required to access the cluster. ``` -opensearch-benchmark execute_test \ +opensearch-benchmark execute-test \ --pipeline="benchmark-only" \ --workload-path="" \ --target-host="" \ @@ -289,7 +289,7 @@ head -n 1000 -documents.json > -documents-1k.json Then, run `opensearch-benchmark execute-test` with the option `--test-mode`. Test mode runs a quick version of the workload test. 
``` -opensearch-benchmark execute_test \ +opensearch-benchmark execute-test \ --pipeline="benchmark-only" \ --workload-path="" \ --target-host="" \ diff --git a/_benchmark/user-guide/distributed-load.md b/_benchmark/user-guide/distributed-load.md index 60fc98500f..f16de29f88 100644 --- a/_benchmark/user-guide/distributed-load.md +++ b/_benchmark/user-guide/distributed-load.md @@ -64,7 +64,7 @@ With OpenSearch Benchmark running on all three nodes and the worker nodes set to On **Node 1**, run a benchmark test with the `worker-ips` set to the IP addresses for your worker nodes, as shown in the following example: ``` -opensearch-benchmark execute_test --pipeline=benchmark-only --workload=eventdata --worker-ips=198.52.100.0,198.53.100.0 --target-hosts= --client-options= --kill-running-processes +opensearch-benchmark execute-test --pipeline=benchmark-only --workload=eventdata --worker-ips=198.52.100.0,198.53.100.0 --target-hosts= --client-options= --kill-running-processes ``` After the test completes, the logs generated by the test appear on your worker nodes. From 4782c3eb0aad63373ae1b0f3dddac6df8007c2fe Mon Sep 17 00:00:00 2001 From: Ryan Bogan Date: Tue, 24 Sep 2024 14:30:18 -0700 Subject: [PATCH 089/111] Fix the k-NN train API command for disk based vector search (#8369) Signed-off-by: Ryan Bogan --- _search-plugins/knn/disk-based-vector-search.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_search-plugins/knn/disk-based-vector-search.md b/_search-plugins/knn/disk-based-vector-search.md index 82da30a0ac..dfb9262db5 100644 --- a/_search-plugins/knn/disk-based-vector-search.md +++ b/_search-plugins/knn/disk-based-vector-search.md @@ -167,7 +167,7 @@ GET my-vector-index/_search For [model-based indexes]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#building-a-k-nn-index-from-a-model), you can specify the `on_disk` parameter in the training request in the same way that you would specify it during index creation. By default, `on_disk` mode will use the [Faiss IVF method]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#supported-faiss-methods) and a compression level of `32x`. 
To run the training API, send the following request: ```json -POST /_plugins/_knn/models/_train/test-model +POST /_plugins/_knn/models/test-model/_train { "training_index": "train-index-name", "training_field": "train-field-name", From d8c3a7491084585f2e085011ec0609ad11b022d8 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Wed, 25 Sep 2024 10:17:50 -0500 Subject: [PATCH 090/111] Update target-throughput.md (#8248) * Update target-throughput.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update target-throughput.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _benchmark/user-guide/target-throughput.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _benchmark/user-guide/target-throughput.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/_benchmark/user-guide/target-throughput.md b/_benchmark/user-guide/target-throughput.md index 63832de595..1be0a8be39 100644 --- a/_benchmark/user-guide/target-throughput.md +++ b/_benchmark/user-guide/target-throughput.md @@ -16,13 +16,15 @@ OpenSearch Benchmark has two testing modes, both of which are related to through ## Benchmarking mode -When you do not specify a `target-throughput`, OpenSearch Benchmark latency tests are performed in *benchmarking mode*. In this mode, the OpenSearch client sends requests to the OpenSearch cluster as fast as possible. After the cluster receives a response from the previous request, OpenSearch Benchmark immediately sends the next request to the OpenSearch client. In this testing mode, latency is identical to service time. +When `target-throughput` is set to `0`, OpenSearch Benchmark latency tests are performed in *benchmarking mode*. In this mode, the OpenSearch client sends requests to the OpenSearch cluster as fast as possible. After the cluster receives a response from the previous request, OpenSearch Benchmark immediately sends the next request to the OpenSearch client. In this testing mode, latency is identical to service time. + +OpenSearch Benchmark issues one request at a time per a single client. The number of clients is set by the `search-clients` setting in the workload parameters. ## Throughput-throttled mode -**Throughput** measures the rate at which OpenSearch Benchmark issues requests, assuming that responses will be returned instantaneously. However, users can set a `target-throughput`, which is a common workload parameter that can be set for each test and is measured in operations per second. +If the `target-throughput` is not set to `0`, then OpenSearch Benchmark issues the next request in accordance with the `target-throughput`, assuming that responses are returned instantaneously. -OpenSearch Benchmark issues one request at a time for a single-client thread, which is specified as `search-clients` in the workload parameters. If `target-throughput` is set to `0`, then OpenSearch Benchmark issues a request immediately after it receives the response from the previous request. 
If the `target-throughput` is not set to `0`, then OpenSearch Benchmark issues the next request in accordance with the `target-throughput`, assuming that responses are returned instantaneously. +**Throughput** measures the rate at which OpenSearch Benchmark issues requests, assuming that responses are returned instantaneously. To configure the request rate, you can set the `target-throughput` workload parameter to the desired number of operations per second for each test. When you want to simulate the type of traffic you might encounter when deploying a production cluster, set the `target-throughput` in your benchmark test to match the number of requests you estimate that the production cluster might receive. The following examples show how the `target-throughput` setting affects the latency measurement. From 292e172d70d119f400aad5e940557c21b75191cd Mon Sep 17 00:00:00 2001 From: Samir Patel Date: Wed, 25 Sep 2024 10:49:04 -0500 Subject: [PATCH 091/111] Update document-level-security.md (#8375) * Update document-level-security.md Max size for document-level security configuration is 1024 KB. Signed-off-by: Samir Patel * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Samir Patel Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _security/access-control/document-level-security.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/_security/access-control/document-level-security.md b/_security/access-control/document-level-security.md index 352fe06a61..b17b60e147 100644 --- a/_security/access-control/document-level-security.md +++ b/_security/access-control/document-level-security.md @@ -13,6 +13,8 @@ Document-level security lets you restrict a role to a subset of documents in an ![Document- and field-level security screen in OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/images/security-dls.png) +The maximum size for the document-level security configuration is 1024 KB (1,048,404 characters). +{: .warning} ## Simple roles From d2e4e37de57ffd81f6cf6a6a773e45d888592452 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Thu, 26 Sep 2024 20:29:55 +0800 Subject: [PATCH 092/111] Remove data streams support for the Create or update alias API (#8380) Signed-off-by: gaobinlong --- _api-reference/index-apis/update-alias.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/_api-reference/index-apis/update-alias.md b/_api-reference/index-apis/update-alias.md index f32d34025e..c069703bf3 100644 --- a/_api-reference/index-apis/update-alias.md +++ b/_api-reference/index-apis/update-alias.md @@ -10,7 +10,7 @@ nav_order: 5 **Introduced 1.0** {: .label .label-purple } -The Create or Update Alias API adds a data stream or index to an alias or updates the settings for an existing alias. For more alias API operations, see [Index aliases]({{site.url}}{{site.baseurl}}/opensearch/index-alias/). +The Create or Update Alias API adds one or more indexes to an alias or updates the settings for an existing alias. For more alias API operations, see [Index aliases]({{site.url}}{{site.baseurl}}/opensearch/index-alias/). The Create or Update Alias API is distinct from the [Alias API]({{site.url}}{{site.baseurl}}/opensearch/rest-api/alias/), which supports the addition and removal of aliases and the removal of alias indexes. 
In contrast, the following API only supports adding or updating an alias without updating the index itself. Each API also uses different request body parameters. {: .note} @@ -35,7 +35,7 @@ PUT /_alias | Parameter | Type | Description | :--- | :--- | :--- -| `target` | String | A comma-delimited list of data streams and indexes. Wildcard expressions (`*`) are supported. To target all data streams and indexes in a cluster, use `_all` or `*`. Optional. | +| `target` | String | A comma-delimited list of indexes. Wildcard expressions (`*`) are supported. To target all indexes in a cluster, use `_all` or `*`. Optional. | | `alias-name` | String | The alias name to be created or updated. Optional. | ## Query parameters @@ -53,7 +53,7 @@ In the request body, you can specify the index name, the alias name, and the set Field | Type | Description :--- | :--- | :--- | :--- -`index` | String | A comma-delimited list of data streams or indexes that you want to associate with the alias. If this field is set, it will override the index name specified in the URL path. +`index` | String | A comma-delimited list of indexes that you want to associate with the alias. If this field is set, it will override the index name specified in the URL path. `alias` | String | The name of the alias. If this field is set, it will override the alias name specified in the URL path. `is_write_index` | Boolean | Specifies whether the index should be a write index. An alias can only have one write index at a time. If a write request is submitted to an alias that links to multiple indexes, then OpenSearch runs the request only on the write index. `routing` | String | Assigns a custom value to a shard for specific operations. From c0127fb264b5111b152b602dade61c2fde171a42 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Thu, 26 Sep 2024 10:37:00 -0400 Subject: [PATCH 093/111] Add has parent query (#8365) * Add has parent query Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _field-types/supported-field-types/join.md | 5 + _field-types/supported-field-types/nested.md | 5 + _query-dsl/joining/has-child.md | 141 +++++++- _query-dsl/joining/has-parent.md | 358 +++++++++++++++++++ _query-dsl/joining/index.md | 4 +- _search-plugins/searching-data/inner-hits.md | 6 +- 6 files changed, 516 insertions(+), 3 deletions(-) create mode 100644 _query-dsl/joining/has-parent.md diff --git a/_field-types/supported-field-types/join.md b/_field-types/supported-field-types/join.md index 1c5b0d1322..009471a784 100644 --- a/_field-types/supported-field-types/join.md +++ b/_field-types/supported-field-types/join.md @@ -327,3 +327,8 @@ PUT testindex1 - Multiple parents are not supported. - You can add a child document to an existing document only if the existing document is already marked as a parent. - You can add a new relation to an existing join field. + +## Next steps + +- Learn about [joining queries]({{site.url}}{{site.baseurl}}/query-dsl/joining/) on join fields. +- Learn more about [retrieving inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). 
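As noted in the limitations list above, a new relation can be added to an existing `join` field. The following is a minimal sketch of how this might be done using the update mapping API; it assumes the `testindex1` index and `product_to_brand` join field from the preceding example, and the added `retailer` child relation is illustrative only:

```json
PUT testindex1/_mapping
{
  "properties": {
    "product_to_brand": {
      "type": "join",
      "relations": {
        "brand": ["product", "retailer"]
      }
    }
  }
}
```
{% include copy-curl.html %}

In this sketch, the full set of relations, including the existing `brand`-to-`product` pair, is repeated in the request along with the new child relation.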
\ No newline at end of file diff --git a/_field-types/supported-field-types/nested.md b/_field-types/supported-field-types/nested.md index f8dfca2ff8..4db270c1dc 100644 --- a/_field-types/supported-field-types/nested.md +++ b/_field-types/supported-field-types/nested.md @@ -314,3 +314,8 @@ Parameter | Description `include_in_parent` | A Boolean value that specifies whether all fields in the child nested object should also be added to the parent document in flattened form. Default is `false`. `include_in_root` | A Boolean value that specifies whether all fields in the child nested object should also be added to the root document in flattened form. Default is `false`. `properties` | Fields of this object, which can be of any supported type. New properties can be dynamically added to this object if `dynamic` is set to `true`. + +## Next steps + +- Learn about [joining queries]({{site.url}}{{site.baseurl}}/query-dsl/joining/) on nested fields. +- Learn about [retrieving inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). \ No newline at end of file diff --git a/_query-dsl/joining/has-child.md b/_query-dsl/joining/has-child.md index c1cc7a5423..a6b67ea8ca 100644 --- a/_query-dsl/joining/has-child.md +++ b/_query-dsl/joining/has-child.md @@ -176,6 +176,140 @@ The response returns both brands: } ``` +## Retrieving inner hits + +To return child documents that matched the query, provide the `inner_hits` parameter: + +```json +GET testindex1/_search +{ + "query" : { + "has_child": { + "type":"product", + "query": { + "match" : { + "name": "watch" + } + }, + "inner_hits": {} + } + } +} +``` +{% include copy-curl.html %} + +The response contains child documents in the `inner_hits` field: + +```json +{ + "took": 52, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "testindex1", + "_id": "1", + "_score": 1, + "_source": { + "name": "Luxury brand", + "product_to_brand": "brand" + }, + "inner_hits": { + "product": { + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 0.53899646, + "hits": [ + { + "_index": "testindex1", + "_id": "3", + "_score": 0.53899646, + "_routing": "1", + "_source": { + "name": "Mechanical watch", + "sales_count": 150, + "product_to_brand": { + "name": "product", + "parent": "1" + } + } + } + ] + } + } + } + }, + { + "_index": "testindex1", + "_id": "2", + "_score": 1, + "_source": { + "name": "Economy brand", + "product_to_brand": "brand" + }, + "inner_hits": { + "product": { + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 0.53899646, + "hits": [ + { + "_index": "testindex1", + "_id": "4", + "_score": 0.53899646, + "_routing": "2", + "_source": { + "name": "Electronic watch", + "sales_count": 300, + "product_to_brand": { + "name": "product", + "parent": "2" + } + } + }, + { + "_index": "testindex1", + "_id": "5", + "_score": 0.53899646, + "_routing": "2", + "_source": { + "name": "Digital watch", + "sales_count": 100, + "product_to_brand": { + "name": "product", + "parent": "2" + } + } + } + ] + } + } + } + } + ] + } +} +``` + +For more information about retrieving inner hits, see [Inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). + ## Parameters The following table lists all top-level parameters supported by `has_child` queries. 
@@ -188,6 +322,7 @@ The following table lists all top-level parameters supported by `has_child` quer | `max_children` | Optional | The maximum number of matching child documents for a parent document. If exceeded, the parent document is excluded from the search results. | | `min_children` | Optional | The minimum number of matching child documents required for a parent document to be included in the results. If not met, the parent is excluded. Default is `1`.| | `score_mode` | Optional | Defines how scores of matching child documents influence the parent document's score. Valid values are:
- `none`: Ignores the relevance scores of child documents and assigns a score of `0` to the parent document.
- `avg`: Uses the average relevance score of all matching child documents.
- `max`: Assigns the highest relevance score from the matching child documents to the parent.
- `min`: Assigns the lowest relevance score from the matching child documents to the parent.
- `sum`: Sums the relevance scores of all matching child documents.
Default is `none`. | +| `inner_hits` | Optional | If provided, returns the underlying hits (child documents) that matched the query. | ## Sorting limitations @@ -256,4 +391,8 @@ The response contains the brands sorted by the highest child `sales_count`: ] } } -``` \ No newline at end of file +``` + +## Next steps + +- Learn more about [retrieving inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). \ No newline at end of file diff --git a/_query-dsl/joining/has-parent.md b/_query-dsl/joining/has-parent.md new file mode 100644 index 0000000000..2232009fe7 --- /dev/null +++ b/_query-dsl/joining/has-parent.md @@ -0,0 +1,358 @@ +--- +layout: default +title: Has parent +parent: Joining queries +nav_order: 20 +--- + +# Has parent query + +The `has_parent` query returns child documents whose parent documents match a specific query. You can establish parent-child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. + +The `has_parent` query is slower than other queries because of the join operation it performs. Performance decreases as the number of matching parent documents increases. Each `has_parent` query in your search may significantly impact query performance. If you prioritize speed, avoid using this query or limit its usage as much as possible. +{: .warning} + +## Example + +Before you can run a `has_parent` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent-child relationships. The index mapping request uses the following format: + +```json +PUT /example_index +{ + "mappings": { + "properties": { + "relationship_field": { + "type": "join", + "relations": { + "parent_doc": "child_doc" + } + } + } + } +} +``` +{% include copy-curl.html %} + +For this example, first configure an index that contains documents representing products and their brands as described in the [`has_child` query example]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-child/). + +To search for the child of a parent, use a `has_parent` query. 
The following query returns child documents (products) made by the brand matching the query `economy`: + +```json +GET testindex1/_search +{ + "query" : { + "has_parent": { + "parent_type":"brand", + "query": { + "match" : { + "name": "economy" + } + } + } + } +} +``` +{% include copy-curl.html %} + +The response returns all products made by the brand: + +```json +{ + "took": 11, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "testindex1", + "_id": "4", + "_score": 1, + "_routing": "2", + "_source": { + "name": "Electronic watch", + "sales_count": 300, + "product_to_brand": { + "name": "product", + "parent": "2" + } + } + }, + { + "_index": "testindex1", + "_id": "5", + "_score": 1, + "_routing": "2", + "_source": { + "name": "Digital watch", + "sales_count": 100, + "product_to_brand": { + "name": "product", + "parent": "2" + } + } + } + ] + } +} +``` + +## Retrieving inner hits + +To return parent documents that matched the query, provide the `inner_hits` parameter: + +```json +GET testindex1/_search +{ + "query" : { + "has_parent": { + "parent_type":"brand", + "query": { + "match" : { + "name": "economy" + } + }, + "inner_hits": {} + } + } +} +``` +{% include copy-curl.html %} + +The response contains parent documents in the `inner_hits` field: + +```json +{ + "took": 11, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 2, + "relation": "eq" + }, + "max_score": 1, + "hits": [ + { + "_index": "testindex1", + "_id": "4", + "_score": 1, + "_routing": "2", + "_source": { + "name": "Electronic watch", + "sales_count": 300, + "product_to_brand": { + "name": "product", + "parent": "2" + } + }, + "inner_hits": { + "brand": { + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1.3862942, + "hits": [ + { + "_index": "testindex1", + "_id": "2", + "_score": 1.3862942, + "_source": { + "name": "Economy brand", + "product_to_brand": "brand" + } + } + ] + } + } + } + }, + { + "_index": "testindex1", + "_id": "5", + "_score": 1, + "_routing": "2", + "_source": { + "name": "Digital watch", + "sales_count": 100, + "product_to_brand": { + "name": "product", + "parent": "2" + } + }, + "inner_hits": { + "brand": { + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1.3862942, + "hits": [ + { + "_index": "testindex1", + "_id": "2", + "_score": 1.3862942, + "_source": { + "name": "Economy brand", + "product_to_brand": "brand" + } + } + ] + } + } + } + } + ] + } +} +``` + +For more information about retrieving inner hits, see [Inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). + +## Parameters + +The following table lists all top-level parameters supported by `has_parent` queries. + +| Parameter | Required/Optional | Description | +|:---|:---|:---| +| `parent_type` | Required | Specifies the name of the parent relationship as defined in the `join` field mapping. | +| `query` | Required | The query to run on parent documents. If a parent document matches the query, the child document is returned. | +| `ignore_unmapped` | Optional | Indicates whether to ignore unmapped `parent_type` fields and not return documents instead of throwing an error. You can provide this parameter when querying multiple indexes, some of which may not contain the `parent_type` field. 
Default is `false`. | +| `score` | Optional | Indicates whether the relevance score of a matching parent document is aggregated into its child documents. If `false`, then the relevance score of the parent document is ignored, and each child document is assigned a relevance score equal to the query's `boost`, which defaults to `1`. If `true`, then the relevance score of the matching parent document is aggregated into the relevance scores of its child documents. Default is `false`. | +| `inner_hits` | Optional | If provided, returns the underlying hits (parent documents) that matched the query. | + + +## Sorting limitations + +The `has_parent` query does not support [sorting results]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/sort/) using standard sorting options. If you need to sort child documents by fields in their parent documents, you can use a [`function_score` query]({{site.url}}{{site.baseurl}}/query-dsl/compound/function-score/) and sort by the child document's score. + +For the preceding example, first add a `customer_satisfaction` field by which you'll sort the child documents belonging to the parent (brand) documents: + +```json +PUT testindex1/_doc/1 +{ + "name": "Luxury watch brand", + "product_to_brand" : "brand", + "customer_satisfaction": 4.5 +} +``` +{% include copy-curl.html %} + +```json +PUT testindex1/_doc/2 +{ + "name": "Economy watch brand", + "product_to_brand" : "brand", + "customer_satisfaction": 3.9 +} +``` +{% include copy-curl.html %} + +Now you can sort child documents (products) based on the `customer_satisfaction` field of their parent brands. This query multiplies the score by the `customer_satisfaction` field of the parent documents: + +```json +GET testindex1/_search +{ + "query": { + "has_parent": { + "parent_type": "brand", + "score": true, + "query": { + "function_score": { + "script_score": { + "script": "_score * doc['customer_satisfaction'].value" + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +The response contains the products, sorted by the highest parent `customer_satisfaction`: + +```json +{ + "took": 11, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 3, + "relation": "eq" + }, + "max_score": 4.5, + "hits": [ + { + "_index": "testindex1", + "_id": "3", + "_score": 4.5, + "_routing": "1", + "_source": { + "name": "Mechanical watch", + "sales_count": 150, + "product_to_brand": { + "name": "product", + "parent": "1" + } + } + }, + { + "_index": "testindex1", + "_id": "4", + "_score": 3.9, + "_routing": "2", + "_source": { + "name": "Electronic watch", + "sales_count": 300, + "product_to_brand": { + "name": "product", + "parent": "2" + } + } + }, + { + "_index": "testindex1", + "_id": "5", + "_score": 3.9, + "_routing": "2", + "_source": { + "name": "Digital watch", + "sales_count": 100, + "product_to_brand": { + "name": "product", + "parent": "2" + } + } + } + ] + } +} +``` + +## Next steps + +- Learn more about [retrieving inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). 
\ No newline at end of file diff --git a/_query-dsl/joining/index.md b/_query-dsl/joining/index.md index 4ed46b3e17..74ad7f1ea1 100644 --- a/_query-dsl/joining/index.md +++ b/_query-dsl/joining/index.md @@ -4,6 +4,8 @@ title: Joining queries has_children: true nav_order: 55 has_toc: false +redirect_from: + - /query-dsl/joining/ --- # Joining queries @@ -12,7 +14,7 @@ OpenSearch is a distributed system in which data is spread across multiple nodes - `nested` queries: Act as wrappers for other queries to search [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields. The nested field objects are searched as though they were indexed as separate documents. - [`has_child`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-child/) queries: Search for parent documents whose child documents match the query. -- `has_parent` queries: Search for child documents whose parent documents match the query. +- [`has_parent`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-parent/) queries: Search for child documents whose parent documents match the query. - `parent_id` queries: A [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type establishes a parent/child relationship between documents in the same index. `parent_id` queries search for child documents that are joined to a specific parent document. If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then joining queries are not executed. diff --git a/_search-plugins/searching-data/inner-hits.md b/_search-plugins/searching-data/inner-hits.md index 395e9e748a..5eda9498b5 100644 --- a/_search-plugins/searching-data/inner-hits.md +++ b/_search-plugins/searching-data/inner-hits.md @@ -806,4 +806,8 @@ The following is the expected result: Using `inner_hits` provides contextual relevance by showing exactly which nested or child documents match the query criteria. This is crucial for applications in which the relevance of results depends on a specific part of the document that matches the query. - Example use case: In a customer support system, you have tickets as parent documents and comments or updates as nested or child documents. You can determine which specific comment matches the search in order to better understand the context of the ticket search. \ No newline at end of file + Example use case: In a customer support system, you have tickets as parent documents and comments or updates as nested or child documents. You can determine which specific comment matches the search in order to better understand the context of the ticket search. + +## Next steps + +- Learn about [joining queries]({{site.url}}{{site.baseurl}}/query-dsl/joining/) on [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) or [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) fields. 
\ No newline at end of file From a6a339614d0967bf92b30f76724edee2ac9337d7 Mon Sep 17 00:00:00 2001 From: Owais Kazi Date: Thu, 26 Sep 2024 08:44:36 -0700 Subject: [PATCH 094/111] Adds documentation for providing search pipeline id in the search/msearch request (#8372) * Adds documentation for providing search pipeline id in the request Signed-off-by: Owais * Doc review Signed-off-by: Fanit Kolchina --------- Signed-off-by: Owais Signed-off-by: Fanit Kolchina Co-authored-by: Fanit Kolchina --- .../search-pipelines/using-search-pipeline.md | 35 +++++++++++++++++-- 1 file changed, 33 insertions(+), 2 deletions(-) diff --git a/_search-plugins/search-pipelines/using-search-pipeline.md b/_search-plugins/search-pipelines/using-search-pipeline.md index ecb988ad11..b6dbbdc5d0 100644 --- a/_search-plugins/search-pipelines/using-search-pipeline.md +++ b/_search-plugins/search-pipelines/using-search-pipeline.md @@ -17,14 +17,45 @@ You can use a search pipeline in the following ways: ## Specifying an existing search pipeline for a request -After you [create a search pipeline]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/creating-search-pipeline/), you can use the pipeline with a query by specifying the pipeline name in the `search_pipeline` query parameter: +After you [create a search pipeline]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/creating-search-pipeline/), you can use the pipeline with a query in the following ways. For a complete example of using a search pipeline with a `filter_query` processor, see [`filter_query` processor example]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/filter-query-processor#example). + +### Specifying the pipeline in a query parameter + +You can specify the pipeline name in the `search_pipeline` query parameter as follows: ```json GET /my_index/_search?search_pipeline=my_pipeline ``` {% include copy-curl.html %} -For a complete example of using a search pipeline with a `filter_query` processor, see [`filter_query` processor example]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/filter-query-processor#example). 
+### Specifying the pipeline in the request body + +You can provide a search pipeline ID in the search request body as follows: + +```json +GET /my-index/_search +{ + "query": { + "match_all": {} + }, + "from": 0, + "size": 10, + "search_pipeline": "my_pipeline" +} +``` +{% include copy-curl.html %} + +For multi-search, you can provide a search pipeline ID in the search request body as follows: + +```json +GET /_msearch +{ "index": "test"} +{ "query": { "match_all": {} }, "from": 0, "size": 10, "search_pipeline": "my_pipeline"} +{ "index": "test-1", "search_type": "dfs_query_then_fetch"} +{ "query": { "match_all": {} }, "search_pipeline": "my_pipeline1" } + +``` +{% include copy-curl.html %} ## Using a temporary search pipeline for a request From 6c5c04c2bed59f975b8a7b41d5aba82aeb30da73 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C4=90=E1=BB=97=20Tr=E1=BB=8Dng=20H=E1=BA=A3i?= <41283691+hainenber@users.noreply.github.com> Date: Thu, 26 Sep 2024 23:26:42 +0700 Subject: [PATCH 095/111] Containerize process for local development setup (#8220) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Containerize process for local development setup Signed-off-by: hainenber * Revert list numbering to "1" for Jekyll convention Signed-off-by: hainenber * Update CONTRIBUTING.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Đỗ Trọng Hải <41283691+hainenber@users.noreply.github.com> * Update CONTRIBUTING.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Đỗ Trọng Hải <41283691+hainenber@users.noreply.github.com> * Update CONTRIBUTING.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Đỗ Trọng Hải <41283691+hainenber@users.noreply.github.com> --------- Signed-off-by: hainenber Signed-off-by: Đỗ Trọng Hải <41283691+hainenber@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- .gitignore | 1 + CONTRIBUTING.md | 10 ++++++++++ build.sh | 8 +++++++- docker-compose.dev.yml | 14 ++++++++++++++ 4 files changed, 32 insertions(+), 1 deletion(-) create mode 100644 docker-compose.dev.yml diff --git a/.gitignore b/.gitignore index da3cf9d144..92f01c5fca 100644 --- a/.gitignore +++ b/.gitignore @@ -7,3 +7,4 @@ Gemfile.lock *.iml .jekyll-cache .project +vendor/bundle diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7afa9d7596..8ab3c2bd4f 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -78,6 +78,8 @@ Follow these steps to set up your local copy of the repository: 1. Navigate to your cloned repository. +##### Building using locally installed packages + 1. Install [Ruby](https://www.ruby-lang.org/en/) if you don't already have it. 
We recommend [RVM](https://rvm.io/), but you can use any method you prefer: ``` @@ -98,6 +100,14 @@ Follow these steps to set up your local copy of the repository: bundle install ``` +##### Building using containerization + +Assuming you have `docker-compose` installed, run the following command: + + ``` + docker compose -f docker-compose.dev.yml up + ``` + #### Troubleshooting Try the following troubleshooting steps if you encounter an error when trying to build the documentation website: diff --git a/build.sh b/build.sh index 060bbfa666..85ef617931 100755 --- a/build.sh +++ b/build.sh @@ -1,3 +1,9 @@ #!/usr/bin/env bash -JEKYLL_LINK_CHECKER=internal bundle exec jekyll serve --host localhost --port 4000 --incremental --livereload --open-url --trace +host="localhost" + +if [[ "$DOCKER_BUILD" == "true" ]]; then + host="0.0.0.0" +fi + +JEKYLL_LINK_CHECKER=internal bundle exec jekyll serve --host ${host} --port 4000 --incremental --livereload --open-url --trace diff --git a/docker-compose.dev.yml b/docker-compose.dev.yml new file mode 100644 index 0000000000..04dd007db9 --- /dev/null +++ b/docker-compose.dev.yml @@ -0,0 +1,14 @@ +version: "3" + +services: + doc_builder: + image: ruby:3.2.4 + volumes: + - .:/app + working_dir: /app + ports: + - "4000:4000" + command: bash -c "bundler install && bash build.sh" + environment: + BUNDLE_PATH: /app/vendor/bundle # Avoid installing gems globally. + DOCKER_BUILD: true # Signify build.sh to bind to 0.0.0.0 for effective doc access from host. From 9c54d2cd7c1f7aa16d43fb18e8c60333c2f175f9 Mon Sep 17 00:00:00 2001 From: AWSHurneyt Date: Thu, 26 Sep 2024 14:44:06 -0700 Subject: [PATCH 096/111] Security analytics plugin - added more details for S3 connection setup (#8374) * Added more details to the s3 connection setup. Signed-off-by: AWSHurneyt * Adjusted wording for cross-account bucket download. Signed-off-by: AWSHurneyt * Created subsection for cross-account bucket download. Signed-off-by: AWSHurneyt * Adjusted wording based on suggestions. 
Signed-off-by: AWSHurneyt * Update getting-started.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: AWSHurneyt Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- .../threat-intelligence/getting-started.md | 55 ++++++++++++++++++- 1 file changed, 52 insertions(+), 3 deletions(-) diff --git a/_security-analytics/threat-intelligence/getting-started.md b/_security-analytics/threat-intelligence/getting-started.md index 366bc2674c..b26063bed0 100644 --- a/_security-analytics/threat-intelligence/getting-started.md +++ b/_security-analytics/threat-intelligence/getting-started.md @@ -50,15 +50,64 @@ Local files uploaded as the threat intelligence source must use the following sp When using the `S3_SOURCE` as a remote store, the following connection information must be provided: -- **IAM Role ARN**: The Amazon Resource Name (ARN) for an AWS Identity and Access Management (IAM) role. -- **S3 bucket directory**: The name of the Amazon Simple Storage Service (Amazon S3) bucket in which the `STIX2` file is stored. -- **Specify a directory or file**: The object key or directory path for the `STIX2` file in the S3 bucket. +- **IAM Role ARN**: The Amazon Resource Name (ARN) for an AWS Identity and Access Management (IAM) role. When using the AWS OpenSearch Service, the role ARN needs to be in the same account as the OpenSearch domain. For more information about adding a new role for the AWS OpenSearch Service, see [Add service ARN](#add-aws-opensearch-service-arn). +- **S3 bucket directory**: The name of the Amazon Simple Storage Service (Amazon S3) bucket in which the `STIX2` file is stored. To access an S3 bucket in a different AWS account, see the [Cross-account S3 bucket connection](#cross-account-s3-bucket-connection) section for more details. +- **Specify a file**: The object key for the `STIX2` file in the S3 bucket. - **Region**: The AWS Region for the S3 bucket. You can also set the **Download schedule**, which determines to where OpenSearch downloads an updated `STIX2` file from the connected S3 bucket. The default interval is once a day. Only daily intervals are supported. Alternatively, you can check the **Download on demand** option, which prevents new data from the bucket from being automatically downloaded. +#### Add AWS OpenSearch Service ARN + +If you're using the AWS OpenSearch Service, create a new ARN role with a custom trust policy. For instructions on how to create the role, see [Creating a role for an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#roles-creatingrole-service-console). 
+ +When creating the role, customize the following settings: + +- Add the following custom trust policy: + + ```bash + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": [ + "opensearchservice.amazonaws.com" + ] + }, + "Action": "sts:AssumeRole" + } + ] + } + ``` + +- On the Permissions policies page, add the `AmazonS3ReadOnlyAccess` permission. + + +#### Cross-account S3 bucket connection + +Because the role ARN needs to be in the same account as the OpenSearch domain, a trust policy needs to be configured that allows the OpenSearch domain to download from S3 buckets from the same account. + +To download from an S3 bucket in another account, the trust policy for that bucket needs to give the role ARN permission to read from the object, as shown in the following example: + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::123456789012:role/account-1-threat-intel-role" + }, + "Action": "s3:*", + "Resource": "arn:aws:s3:::account-2-threat-intel-bucket/*" + } + ] +} +``` ## Step 2: Set up scanning for your log sources From 55800f6b74a2f32c9f0da0d202fd4c386b512bc9 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Fri, 27 Sep 2024 09:58:41 -0400 Subject: [PATCH 097/111] Add parent ID query (#8384) * Add parent ID query Signed-off-by: Fanit Kolchina * Rearrange index page Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Unify parent/child wording Signed-off-by: Fanit Kolchina --------- Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _benchmark/reference/workloads/corpora.md | 2 +- .../anatomy-of-a-workload.md | 2 +- _field-types/supported-field-types/index.md | 2 +- _field-types/supported-field-types/join.md | 2 +- .../supported-field-types/object-fields.md | 2 +- _query-dsl/joining/has-child.md | 6 +- _query-dsl/joining/has-parent.md | 4 +- _query-dsl/joining/index.md | 11 +- _query-dsl/joining/nested.md | 347 ++++++++++++++++++ _query-dsl/joining/parent-id.md | 96 +++++ _search-plugins/searching-data/inner-hits.md | 4 +- 11 files changed, 462 insertions(+), 16 deletions(-) create mode 100644 _query-dsl/joining/nested.md create mode 100644 _query-dsl/joining/parent-id.md diff --git a/_benchmark/reference/workloads/corpora.md b/_benchmark/reference/workloads/corpora.md index 0e8d408e9a..f59e2b8a6a 100644 --- a/_benchmark/reference/workloads/corpora.md +++ b/_benchmark/reference/workloads/corpora.md @@ -49,7 +49,7 @@ Each entry in the `documents` array consists of the following options. Parameter | Required | Type | Description :--- | :--- | :--- | :--- `source-file` | Yes | String | The file name containing the corresponding documents for the workload. When using OpenSearch Benchmark locally, documents are contained in a JSON file. When providing a `base_url`, use a compressed file format: `.zip`, `.bz2`, `.gz`, `.tar`, `.tar.gz`, `.tgz`, or `.tar.bz2`. The compressed file must have one JSON file containing the name. -`document-count` | Yes | Integer | The number of documents in the `source-file`, which determines which client indexes correlate to which parts of the document corpus. Each N client receives an Nth of the document corpus. 
When using a source that contains a document with a parent-child relationship, specify the number of parent documents. +`document-count` | Yes | Integer | The number of documents in the `source-file`, which determines which client indexes correlate to which parts of the document corpus. Each N client receives an Nth of the document corpus. When using a source that contains a document with a parent/child relationship, specify the number of parent documents. `base-url` | No | String | An http(s), Amazon Simple Storage Service (Amazon S3), or Google Cloud Storage URL that points to the root path where OpenSearch Benchmark can obtain the corresponding source file. `source-format` | No | String | Defines the format OpenSearch Benchmark uses to interpret the data file specified in `source-file`. Only `bulk` is supported. `compressed-bytes` | No | Integer | The size, in bytes, of the compressed source file, indicating how much data OpenSearch Benchmark downloads. diff --git a/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md b/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md index 3bf339e4d5..f8e1d90d32 100644 --- a/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md +++ b/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md @@ -98,7 +98,7 @@ To create an index, specify its `name`. To add definitions to your index, use th The `corpora` element requires the name of the index containing the document corpus, for example, `movies`, and a list of parameters that define the document corpora. This list includes the following parameters: - `source-file`: The file name that contains the workload's corresponding documents. When using OpenSearch Benchmark locally, documents are contained in a JSON file. When providing a `base_url`, use a compressed file format: `.zip`, `.bz2`, `.zst`, `.gz`, `.tar`, `.tar.gz`, `.tgz`, or `.tar.bz2`. The compressed file must include one JSON file containing the name. -- `document-count`: The number of documents in the `source-file`, which determines which client indexes correlate to which parts of the document corpus. Each N client is assigned an Nth of the document corpus to ingest into the test cluster. When using a source that contains a document with a parent-child relationship, specify the number of parent documents. +- `document-count`: The number of documents in the `source-file`, which determines which client indexes correlate to which parts of the document corpus. Each N client is assigned an Nth of the document corpus to ingest into the test cluster. When using a source that contains a document with a parent/child relationship, specify the number of parent documents. - `uncompressed-bytes`: The size, in bytes, of the source file after decompression, indicating how much disk space the decompressed source file needs. - `compressed-bytes`: The size, in bytes, of the source file before decompression. This can help you assess the amount of time needed for the cluster to ingest documents. 
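The corpora parameters listed above come together in the workload's `corpora` definition. The following is a minimal sketch of what such an entry might look like; the corpus name, file name, and byte counts are illustrative only:

```json
{
  "corpora": [
    {
      "name": "movies",
      "documents": [
        {
          "source-file": "movies-documents.json",
          "document-count": 11658903,
          "compressed-bytes": 285037980,
          "uncompressed-bytes": 1544799789
        }
      ]
    }
  ]
}
```

If the corpus contains documents with a parent/child relationship, `document-count` should reflect the number of parent documents, as noted above.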
diff --git a/_field-types/supported-field-types/index.md b/_field-types/supported-field-types/index.md index 7c7b7375f9..a43da396d5 100644 --- a/_field-types/supported-field-types/index.md +++ b/_field-types/supported-field-types/index.md @@ -22,7 +22,7 @@ Boolean | [`boolean`]({{site.url}}{{site.baseurl}}/field-types/supported-field-t [Date]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/dates/)| [`date`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/date/): A date stored in milliseconds.
[`date_nanos`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/date-nanos/): A date stored in nanoseconds. IP | [`ip`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/ip/): An IP address in IPv4 or IPv6 format. [Range]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/range/) | A range of values (`integer_range`, `long_range`, `double_range`, `float_range`, `date_range`, `ip_range`). -[Object]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/object-fields/)| [`object`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/object/): A JSON object.
[`nested`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/): Used when objects in an array need to be indexed independently as separate documents.
[`flat_object`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/flat-object/): A JSON object treated as a string.
[`join`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/): Establishes a parent-child relationship between documents in the same index. +[Object]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/object-fields/)| [`object`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/object/): A JSON object.
[`nested`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/): Used when objects in an array need to be indexed independently as separate documents.
[`flat_object`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/flat-object/): A JSON object treated as a string.
[`join`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/): Establishes a parent/child relationship between documents in the same index. [String]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/string/)|[`keyword`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/keyword/): Contains a string that is not analyzed.
[`text`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/text/): Contains a string that is analyzed.
[`match_only_text`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/match-only-text/): A space-optimized version of a `text` field.
[`token_count`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/token-count/): Stores the number of analyzed tokens in a string.
[`wildcard`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/wildcard/): A variation of `keyword` with efficient substring and regular expression matching. [Autocomplete]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/autocomplete/) |[`completion`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/completion/): Provides autocomplete functionality through a completion suggester.
[`search_as_you_type`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/search-as-you-type/): Provides search-as-you-type functionality using both prefix and infix completion. [Geographic]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/geographic/)| [`geo_point`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/geo-point/): A geographic point.
[`geo_shape`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/geo-shape/): A geographic shape. diff --git a/_field-types/supported-field-types/join.md b/_field-types/supported-field-types/join.md index 009471a784..fd808a65e7 100644 --- a/_field-types/supported-field-types/join.md +++ b/_field-types/supported-field-types/join.md @@ -18,7 +18,7 @@ A join field type establishes a parent/child relationship between documents in t ## Example -Create a mapping to establish a parent-child relationship between products and their brands: +Create a mapping to establish a parent/child relationship between products and their brands: ```json PUT testindex1 diff --git a/_field-types/supported-field-types/object-fields.md b/_field-types/supported-field-types/object-fields.md index 429c5b94c7..e683e70f0d 100644 --- a/_field-types/supported-field-types/object-fields.md +++ b/_field-types/supported-field-types/object-fields.md @@ -19,5 +19,5 @@ Field data type | Description [`object`]({{site.url}}{{site.baseurl}}/field-types/object/) | A JSON object. [`nested`]({{site.url}}{{site.baseurl}}/field-types/nested/) | Used when objects in an array need to be indexed independently as separate documents. [`flat_object`]({{site.url}}{{site.baseurl}}/field-types/flat-object/) | A JSON object treated as a string. -[`join`]({{site.url}}{{site.baseurl}}/field-types/join/) | Establishes a parent-child relationship between documents in the same index. +[`join`]({{site.url}}{{site.baseurl}}/field-types/join/) | Establishes a parent/child relationship between documents in the same index. diff --git a/_query-dsl/joining/has-child.md b/_query-dsl/joining/has-child.md index a6b67ea8ca..c7da5bf7a9 100644 --- a/_query-dsl/joining/has-child.md +++ b/_query-dsl/joining/has-child.md @@ -7,14 +7,14 @@ nav_order: 10 # Has child query -The `has_child` query returns parent documents whose child documents match a specific query. You can establish parent-child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. +The `has_child` query returns parent documents whose child documents match a specific query. You can establish parent/child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. The `has_child` query is slower than other queries because of the join operation it performs. Performance decreases as the number of matching child documents pointing to different parent documents increases. Each `has_child` query in your search may significantly impact query performance. If you prioritize speed, avoid using this query or limit its usage as much as possible. {: .warning} ## Example -Before you can run a `has_child` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent-child relationships. The index mapping request uses the following format: +Before you can run a `has_child` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent/child relationships. The index mapping request uses the following format: ```json PUT /example_index @@ -35,7 +35,7 @@ PUT /example_index In this example, you'll configure an index that contains documents representing products and their brands. 
-First, create the index and establish the parent-child relationship between `brand` and `product`: +First, create the index and establish the parent/child relationship between `brand` and `product`: ```json PUT testindex1 diff --git a/_query-dsl/joining/has-parent.md b/_query-dsl/joining/has-parent.md index 2232009fe7..6b293ffff2 100644 --- a/_query-dsl/joining/has-parent.md +++ b/_query-dsl/joining/has-parent.md @@ -7,14 +7,14 @@ nav_order: 20 # Has parent query -The `has_parent` query returns child documents whose parent documents match a specific query. You can establish parent-child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. +The `has_parent` query returns child documents whose parent documents match a specific query. You can establish parent/child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. The `has_parent` query is slower than other queries because of the join operation it performs. Performance decreases as the number of matching parent documents increases. Each `has_parent` query in your search may significantly impact query performance. If you prioritize speed, avoid using this query or limit its usage as much as possible. {: .warning} ## Example -Before you can run a `has_parent` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent-child relationships. The index mapping request uses the following format: +Before you can run a `has_parent` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent/child relationships. The index mapping request uses the following format: ```json PUT /example_index diff --git a/_query-dsl/joining/index.md b/_query-dsl/joining/index.md index 74ad7f1ea1..f0a0060640 100644 --- a/_query-dsl/joining/index.md +++ b/_query-dsl/joining/index.md @@ -12,10 +12,13 @@ redirect_from: OpenSearch is a distributed system in which data is spread across multiple nodes. Thus, running a SQL-like JOIN operation in OpenSearch is resource intensive. As an alternative, OpenSearch provides the following queries that perform join operations and are optimized for scaling across multiple nodes: -- `nested` queries: Act as wrappers for other queries to search [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields. The nested field objects are searched as though they were indexed as separate documents. -- [`has_child`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-child/) queries: Search for parent documents whose child documents match the query. -- [`has_parent`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-parent/) queries: Search for child documents whose parent documents match the query. -- `parent_id` queries: A [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type establishes a parent/child relationship between documents in the same index. `parent_id` queries search for child documents that are joined to a specific parent document. + +- Queries for searching [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields: + - `nested` queries: Act as wrappers for other queries to search [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields. 
The nested field objects are searched as though they were indexed as separate documents. +- Queries for searching documents connected by a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type, which establishes a parent/child relationship between documents in the same index: + - [`has_child`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-child/) queries: Search for parent documents whose child documents match the query. + - [`has_parent`]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-parent/) queries: Search for child documents whose parent documents match the query. + - [`parent_id`]({{site.url}}{{site.baseurl}}/query-dsl/joining/parent-id/) queries: Search for child documents that are joined to a specific parent document. If [`search.allow_expensive_queries`]({{site.url}}{{site.baseurl}}/query-dsl/index/#expensive-queries) is set to `false`, then joining queries are not executed. {: .important} \ No newline at end of file diff --git a/_query-dsl/joining/nested.md b/_query-dsl/joining/nested.md new file mode 100644 index 0000000000..431a40ed1a --- /dev/null +++ b/_query-dsl/joining/nested.md @@ -0,0 +1,347 @@ +--- +layout: default +title: Nested +parent: Joining queries +nav_order: 30 +--- + +# Nested query + +The `nested` query acts as a wrapper for other queries to search [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) fields. The nested field objects are searched as though they were indexed as separate documents. If an object matches the search, the `nested` query returns the parent document at the root level. + +## Example + +Before you can run a `nested` query, your index must contain a [nested]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/nested/) field. + +To configure an example index containing nested fields, send the following request: + +```json +PUT /testindex +{ + "mappings": { + "properties": { + "patient": { + "type": "nested", + "properties": { + "name": { + "type": "text" + }, + "age": { + "type": "integer" + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +Next, index a document into the example index: + +```json +PUT /testindex/_doc/1 +{ + "patient": { + "name": "John Doe", + "age": 56 + } +} +``` +{% include copy-curl.html %} + +To search the nested `patient` field, wrap your query in a `nested` query and provide the `path` to the nested field: + +```json +GET /testindex/_search +{ + "query": { + "nested": { + "path": "patient", + "query": { + "match": { + "patient.name": "John" + } + } + } + } +} +``` +{% include copy-curl.html %} + +The query returns the matching document: + +```json +{ + "took": 3, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 0.2876821, + "hits": [ + { + "_index": "testindex", + "_id": "1", + "_score": 0.2876821, + "_source": { + "patient": { + "name": "John Doe", + "age": 56 + } + } + } + ] + } +} +``` + +## Retrieving inner hits + +To return inner hits that matched the query, provide the `inner_hits` parameter: + +```json +GET /testindex/_search +{ + "query": { + "nested": { + "path": "patient", + "query": { + "match": { + "patient.name": "John" + } + }, + "inner_hits": {} + } + } +} +``` +{% include copy-curl.html %} + +The response contains the additional `inner_hits` field. The `_nested` field identifies the specific inner object from which the inner hit originated. 
It contains the nested hit and the offset relative to its position in the `_source`. Because of sorting and scoring, the position of the hit objects in `inner_hits` often differs from their original location in the nested object. + +By default, the `_source` of the hit objects within `inner_hits` is returned relative to the `_nested` field. In this example, the `_source` within `inner_hits` contains the `name` and `age` fields as opposed to the top-level `_source`, which contains the whole `patient` object: + +```json +{ + "took": 38, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 0.2876821, + "hits": [ + { + "_index": "testindex", + "_id": "1", + "_score": 0.2876821, + "_source": { + "patient": { + "name": "John Doe", + "age": 56 + } + }, + "inner_hits": { + "patient": { + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 0.2876821, + "hits": [ + { + "_index": "testindex", + "_id": "1", + "_nested": { + "field": "patient", + "offset": 0 + }, + "_score": 0.2876821, + "_source": { + "name": "John Doe", + "age": 56 + } + } + ] + } + } + } + } + ] + } +} +``` + +You can disable returning `_source` by configuring the `_source` field in the mappings. For more information, see [Source]({{site.url}}{{site.baseurl}}/field-types/metadata-fields/source/). +{: .tip} + +For more information about retrieving inner hits, see [Inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). + +## Multi-level nested queries + +You can search documents that have nested objects inside other nested objects using multi-level nested queries. In this example, you'll query multiple layers of nested fields by specifying a nested query for each level of the hierarchy. + +First, create an index with multi-level nested fields: + +```json +PUT /patients +{ + "mappings": { + "properties": { + "patient": { + "type": "nested", + "properties": { + "name": { + "type": "text" + }, + "contacts": { + "type": "nested", + "properties": { + "name": { + "type": "text" + }, + "relationship": { + "type": "text" + }, + "phone": { + "type": "keyword" + } + } + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +Next, index a document into the example index: + +```json +PUT /patients/_doc/1 +{ + "patient": { + "name": "John Doe", + "contacts": [ + { + "name": "Jane Doe", + "relationship": "mother", + "phone": "5551111" + }, + { + "name": "Joe Doe", + "relationship": "father", + "phone": "5552222" + } + ] + } +} +``` +{% include copy-curl.html %} + +To search the nested `patient` field, use a multi-level `nested` query. 
The following query searches for patients whose contact information includes a person named `Jane` with a relationship of `mother`: + +```json +GET /patients/_search +{ + "query": { + "nested": { + "path": "patient", + "query": { + "nested": { + "path": "patient.contacts", + "query": { + "bool": { + "must": [ + { "match": { "patient.contacts.relationship": "mother" } }, + { "match": { "patient.contacts.name": "Jane" } } + ] + } + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +The query returns the patient who has a contact entry matching these details: + +```json +{ + "took": 14, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 1.3862942, + "hits": [ + { + "_index": "patients", + "_id": "1", + "_score": 1.3862942, + "_source": { + "patient": { + "name": "John Doe", + "contacts": [ + { + "name": "Jane Doe", + "relationship": "mother", + "phone": "5551111" + }, + { + "name": "Joe Doe", + "relationship": "father", + "phone": "5552222" + } + ] + } + } + } + ] + } +} +``` + +## Parameters + +The following table lists all top-level parameters supported by `nested` queries. + +| Parameter | Required/Optional | Description | +|:---|:---|:---| +| `path` | Required | Specifies the path to the nested object that you want to search. | +| `query` | Required | The query to run on the nested objects within the specified `path`. If a nested object matches the query, the root parent document is returned. You can search nested fields using dot notation, such as `nested_object.subfield`. Multi-level nesting is supported and automatically detected. Thus, an inner `nested` query within another nested query automatically matches the correct nesting level, instead of the root. | +| `ignore_unmapped` | Optional | Indicates whether to ignore unmapped `path` fields and not return documents instead of throwing an error. You can provide this parameter when querying multiple indexes, some of which may not contain the `path` field. Default is `false`. | +| `score_mode` | Optional | Defines how scores of matching inner documents influence the parent document's score. Valid values are:
- `avg`: Uses the average relevance score of all matching inner documents.
- `max`: Assigns the highest relevance score from the matching inner documents to the parent.
- `min`: Assigns the lowest relevance score from the matching inner documents to the parent.
- `sum`: Sums the relevance scores of all matching inner documents.
- `none`: Ignores the relevance scores of inner documents and assigns a score of `0` to the parent document.
Default is `avg`. | +| `inner_hits` | Optional | If provided, returns the underlying hits that matched the query. | + +## Next steps + +- Learn more about [retrieving inner hits]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/inner-hits/). \ No newline at end of file diff --git a/_query-dsl/joining/parent-id.md b/_query-dsl/joining/parent-id.md new file mode 100644 index 0000000000..cbf86a796e --- /dev/null +++ b/_query-dsl/joining/parent-id.md @@ -0,0 +1,96 @@ +--- +layout: default +title: Parent ID +parent: Joining queries +nav_order: 40 +--- + +# Parent ID query + +The `parent_id` query returns child documents whose parent document has the specified ID. You can establish parent/child relationships between documents in the same index by using a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field type. + +## Example + +Before you can run a `parent_id` query, your index must contain a [join]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/join/) field in order to establish parent/child relationships. The index mapping request uses the following format: + +```json +PUT /example_index +{ + "mappings": { + "properties": { + "relationship_field": { + "type": "join", + "relations": { + "parent_doc": "child_doc" + } + } + } + } +} +``` +{% include copy-curl.html %} + +For this example, first configure an index that contains documents representing products and their brands as described in the [`has_child` query example]({{site.url}}{{site.baseurl}}/query-dsl/joining/has-child/). + +To search for child documents of a specific parent document, use a `parent_id` query. The following query returns child documents (products) whose parent document has the ID `1`: + +```json +GET testindex1/_search +{ + "query": { + "parent_id": { + "type": "product", + "id": "1" + } + } +} +``` +{% include copy-curl.html %} + +The response returns the child product: + +```json +{ + "took": 57, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 1, + "relation": "eq" + }, + "max_score": 0.87546873, + "hits": [ + { + "_index": "testindex1", + "_id": "3", + "_score": 0.87546873, + "_routing": "1", + "_source": { + "name": "Mechanical watch", + "sales_count": 150, + "product_to_brand": { + "name": "product", + "parent": "1" + } + } + } + ] + } +} +``` + +## Parameters + +The following table lists all top-level parameters supported by `parent_id` queries. + +| Parameter | Required/Optional | Description | +|:---|:---|:---| +| `type` | Required | Specifies the name of the child relationship as defined in the `join` field mapping. | +| `id` | Required | The ID of the parent document. The query returns child documents associated with this parent document. | +| `ignore_unmapped` | Optional | Indicates whether to ignore unmapped `type` fields and not return documents instead of throwing an error. You can provide this parameter when querying multiple indexes, some of which may not contain the `type` field. Default is `false`. 
| \ No newline at end of file diff --git a/_search-plugins/searching-data/inner-hits.md b/_search-plugins/searching-data/inner-hits.md index 5eda9498b5..38fc7a491d 100644 --- a/_search-plugins/searching-data/inner-hits.md +++ b/_search-plugins/searching-data/inner-hits.md @@ -139,8 +139,8 @@ The preceding query searches for nested user objects containing the name John an } } ``` -## Inner hits with parent-child objects -Parent-join relationships allow you to create relationships between documents of different types within the same index. The following example request searches with `inner_hits` using parent-child objects. +## Inner hits with parent/child objects +Parent-join relationships allow you to create relationships between documents of different types within the same index. The following example request searches with `inner_hits` using parent/child objects. 1. Create an index with a parent-join field: From 00d5bb3fe1df02f3e20a14a1652d57f2b3c25df0 Mon Sep 17 00:00:00 2001 From: inpink <108166692+inpink@users.noreply.github.com> Date: Mon, 30 Sep 2024 22:04:10 +0900 Subject: [PATCH 098/111] docs: fix typo in CCR Plugin Auto Follow documentation (#8390) Corrected a typo in the Auto Follow section of the CCR Plugin documentation, changing "conection" to "connection." Signed-off-by: inpink <108166692+inpink@users.noreply.github.com> --- _tuning-your-cluster/replication-plugin/auto-follow.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_tuning-your-cluster/replication-plugin/auto-follow.md b/_tuning-your-cluster/replication-plugin/auto-follow.md index 828b835387..92e7a6c144 100644 --- a/_tuning-your-cluster/replication-plugin/auto-follow.md +++ b/_tuning-your-cluster/replication-plugin/auto-follow.md @@ -98,9 +98,9 @@ To delete a replication rule, send the following request to the follower cluster ```bash curl -XDELETE -k -H 'Content-Type: application/json' -u 'admin:' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' { - "leader_alias" : "my-conection-alias", + "leader_alias" : "my-connection-alias", "name": "my-replication-rule" }' ``` -When you delete a replication rule, OpenSearch stops replicating *new* indexes that match the pattern, but existing indexes that the rule previously created remain read-only and continue to replicate. If you need to stop existing replication activity and open the indexes up for writes, use the [stop replication API operation]({{site.url}}{{site.baseurl}}/replication-plugin/api/#stop-replication). \ No newline at end of file +When you delete a replication rule, OpenSearch stops replicating *new* indexes that match the pattern, but existing indexes that the rule previously created remain read-only and continue to replicate. If you need to stop existing replication activity and open the indexes up for writes, use the [stop replication API operation]({{site.url}}{{site.baseurl}}/replication-plugin/api/#stop-replication). From 1867a65df2c8294057539d39913ebe2f230888e3 Mon Sep 17 00:00:00 2001 From: Noah Staveley <111019874+noahstaveley@users.noreply.github.com> Date: Mon, 30 Sep 2024 06:48:02 -0700 Subject: [PATCH 099/111] Update documentation to reflect k-NN FAISS AVX512 support (#8307) * AVX512 updates Signed-off-by: Noah Staveley * updatedwith correct version for AVX512 release Signed-off-by: Noah Staveley * change to reflect avx512/avx2 preference order Signed-off-by: Noah Staveley * change to knn-index. 
specified order of performance Signed-off-by: Noah Staveley * Update _search-plugins/knn/knn-index.md Signed-off-by: Noah Staveley * Update _search-plugins/knn/settings.md and knn-index.md Signed-off-by: Noah Staveley --------- Signed-off-by: Noah Staveley Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _search-plugins/knn/knn-index.md | 18 +++++++++++++----- _search-plugins/knn/settings.md | 1 + 2 files changed, 14 insertions(+), 5 deletions(-) diff --git a/_search-plugins/knn/knn-index.md b/_search-plugins/knn/knn-index.md index 15d660ca00..620b262cf9 100644 --- a/_search-plugins/knn/knn-index.md +++ b/_search-plugins/knn/knn-index.md @@ -51,7 +51,7 @@ Starting with k-NN plugin version 2.16, you can use `binary` vectors with the `f ## SIMD optimization for the Faiss engine -Starting with version 2.13, the k-NN plugin supports [Single Instruction Multiple Data (SIMD)](https://en.wikipedia.org/wiki/Single_instruction,_multiple_data) processing if the underlying hardware supports SIMD instructions (AVX2 on x64 architecture and Neon on ARM64 architecture). SIMD is supported by default on Linux machines only for the Faiss engine. SIMD architecture helps boost overall performance by improving indexing throughput and reducing search latency. +Starting with version 2.13, the k-NN plugin supports [Single Instruction Multiple Data (SIMD)](https://en.wikipedia.org/wiki/Single_instruction,_multiple_data) processing if the underlying hardware supports SIMD instructions (AVX2 on x64 architecture and Neon on ARM64 architecture). SIMD is supported by default on Linux machines only for the Faiss engine. SIMD architecture helps boost overall performance by improving indexing throughput and reducing search latency. Starting with version 2.18, the k-NN plugin supports AVX512 SIMD instructions on x64 architecture. SIMD optimization is applicable only if the vector dimension is a multiple of 8. {: .note} @@ -60,14 +60,22 @@ SIMD optimization is applicable only if the vector dimension is a multiple of 8. ### x64 architecture -For the x64 architecture, two different versions of the Faiss library are built and shipped with the artifact: +For x64 architecture, the following versions of the Faiss library are built and shipped with the artifact: - `libopensearchknn_faiss.so`: The non-optimized Faiss library without SIMD instructions. -- `libopensearchknn_faiss_avx2.so`: The Faiss library that contains AVX2 SIMD instructions. +- `libopensearchknn_faiss_avx512.so`: The Faiss library containing AVX512 SIMD instructions. +- `libopensearchknn_faiss_avx2.so`: The Faiss library containing AVX2 SIMD instructions. -If your hardware supports AVX2, the k-NN plugin loads the `libopensearchknn_faiss_avx2.so` library at runtime. +When using the Faiss library, the performance ranking is as follows: AVX512 > AVX2 > no optimization. +{: .note } + +If your hardware supports AVX512, the k-NN plugin loads the `libopensearchknn_faiss_avx512.so` library at runtime. + +If your hardware supports AVX2 but doesn't support AVX512, the k-NN plugin loads the `libopensearchknn_faiss_avx2.so` library at runtime. + +To disable the AVX512 and AVX2 SIMD instructions and load the non-optimized Faiss library (`libopensearchknn_faiss.so`), specify the `knn.faiss.avx512.disabled` and `knn.faiss.avx2.disabled` static settings as `true` in `opensearch.yml` (by default, both of these are `false`). 
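For illustration, a minimal `opensearch.yml` sketch that disables both SIMD-optimized libraries might look like the following (the setting names are taken from the paragraph above; the `true` values are assumptions chosen for the example, and because these are static settings they take effect only after a cluster restart):

```yml
# Hedged example: with both optimized libraries disabled, the plugin falls back
# to loading the non-optimized libopensearchknn_faiss.so at runtime.
knn.faiss.avx512.disabled: true
knn.faiss.avx2.disabled: true
```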
-To disable AVX2 and load the non-optimized Faiss library (`libopensearchknn_faiss.so`), specify the `knn.faiss.avx2.disabled` static setting as `true` in `opensearch.yml` (default is `false`). Note that to update a static setting, you must stop the cluster, change the setting, and restart the cluster. For more information, see [Static settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/#static-settings). +Note that to update a static setting, you must stop the cluster, change the setting, and restart the cluster. For more information, see [Static settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index/#static-settings). ### ARM64 architecture diff --git a/_search-plugins/knn/settings.md b/_search-plugins/knn/settings.md index 1b9aa3608c..e4731ec94c 100644 --- a/_search-plugins/knn/settings.md +++ b/_search-plugins/knn/settings.md @@ -27,6 +27,7 @@ Setting | Static/Dynamic | Default | Description `knn.model.index.number_of_replicas`| Dynamic | `1` | The number of replica shards to use for the model system index. Generally, in a multi-node cluster, this value should be at least 1 in order to increase stability. `knn.model.cache.size.limit` | Dynamic | `10%` | The model cache limit cannot exceed 25% of the JVM heap. `knn.faiss.avx2.disabled` | Static | `false` | A static setting that specifies whether to disable the SIMD-based `libopensearchknn_faiss_avx2.so` library and load the non-optimized `libopensearchknn_faiss.so` library for the Faiss engine on machines with x64 architecture. For more information, see [SIMD optimization for the Faiss engine]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#simd-optimization-for-the-faiss-engine). +`knn.faiss.avx512.disabled` | Static | `false` | A static setting that specifies whether to disable the SIMD-based `libopensearchknn_faiss_avx512.so` library and load the `libopensearchknn_faiss_avx2.so` library or the non-optimized `libopensearchknn_faiss.so` library for the Faiss engine on machines with x64 architecture. For more information, see [SIMD optimization for the Faiss engine]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#simd-optimization-for-the-faiss-engine). ## Index settings From 4bb49d37fc09b85cf918087f9d25b23f314d3562 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C4=90=E1=BB=97=20Tr=E1=BB=8Dng=20H=E1=BA=A3i?= <41283691+hainenber@users.noreply.github.com> Date: Tue, 1 Oct 2024 01:04:49 +0700 Subject: [PATCH 100/111] fix: correct json extension for sample ecommerce data (#8348) Signed-off-by: hainenber Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _getting-started/ingest-data.md | 10 +++++----- .../upgrade-opensearch/appendix/rolling-upgrade-lab.md | 8 ++++---- assets/examples/{ecommerce.json => ecommerce.ndjson} | 0 3 files changed, 9 insertions(+), 9 deletions(-) rename assets/examples/{ecommerce.json => ecommerce.ndjson} (100%) diff --git a/_getting-started/ingest-data.md b/_getting-started/ingest-data.md index 73cf1502f7..866a88e68a 100644 --- a/_getting-started/ingest-data.md +++ b/_getting-started/ingest-data.md @@ -50,32 +50,32 @@ Use the following steps to create a sample index and define field mappings for t ``` {% include copy.html %} -1. Download [ecommerce.json](https://github.com/opensearch-project/documentation-website/blob/{{site.opensearch_major_minor_version}}/assets/examples/ecommerce.json). This file contains the index data formatted so that it can be ingested by the Bulk API: +1. 
Download [ecommerce.ndjson](https://github.com/opensearch-project/documentation-website/blob/{{site.opensearch_major_minor_version}}/assets/examples/ecommerce.ndjson). This file contains the index data formatted so that it can be ingested by the Bulk API: To use cURL, send the following request: ```bash - curl -O https://raw.githubusercontent.com/opensearch-project/documentation-website/{{site.opensearch_major_minor_version}}/assets/examples/ecommerce.json + curl -O https://raw.githubusercontent.com/opensearch-project/documentation-website/{{site.opensearch_major_minor_version}}/assets/examples/ecommerce.ndjson ``` {% include copy.html %} To use wget, send the following request: ``` - wget https://raw.githubusercontent.com/opensearch-project/documentation-website/{{site.opensearch_major_minor_version}}/assets/examples/ecommerce.json + wget https://raw.githubusercontent.com/opensearch-project/documentation-website/{{site.opensearch_major_minor_version}}/assets/examples/ecommerce.ndjson ``` {% include copy.html %} 1. Define the field mappings provided in the mapping file: ```bash - curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce" -ku admin: --data-binary "@ecommerce-field_mappings.json" + curl -H "Content-Type: application/json" -X PUT "https://localhost:9200/ecommerce" -ku admin: --data-binary "@ecommerce-field_mappings.json" ``` {% include copy.html %} 1. Upload the documents using the Bulk API: ```bash - curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin: --data-binary "@ecommerce.json" + curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin: --data-binary "@ecommerce.ndjson" ``` {% include copy.html %} diff --git a/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md b/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md index 924900dbc8..c467601f1c 100644 --- a/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md +++ b/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md @@ -169,12 +169,12 @@ This section can be broken down into two parts: {% include copy.html %} 1. Next, download the bulk data that you will ingest into this index: ```bash - wget https://raw.githubusercontent.com/opensearch-project/documentation-website/main/assets/examples/ecommerce.json + wget https://raw.githubusercontent.com/opensearch-project/documentation-website/main/assets/examples/ecommerce.ndjson ``` {% include copy.html %} 1. Use the [Create index]({{site.url}}{{site.baseurl}}/api-reference/index-apis/create-index/) API to create an index using the mappings defined in `ecommerce-field_mappings.json`: ```bash - curl -H "Content-Type: application/x-ndjson" \ + curl -H "Content-Type: application/json" \ -X PUT "https://localhost:9201/ecommerce?pretty" \ --data-binary "@ecommerce-field_mappings.json" \ -ku admin: @@ -188,11 +188,11 @@ This section can be broken down into two parts: "index" : "ecommerce" } ``` -1. Use the [Bulk]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) API to add data to the new ecommerce index from `ecommerce.json`: +1. 
Use the [Bulk]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) API to add data to the new ecommerce index from `ecommerce.ndjson`: ```bash curl -H "Content-Type: application/x-ndjson" \ -X PUT "https://localhost:9201/ecommerce/_bulk?pretty" \ - --data-binary "@ecommerce.json" \ + --data-binary "@ecommerce.ndjson" \ -ku admin: ``` {% include copy.html %} diff --git a/assets/examples/ecommerce.json b/assets/examples/ecommerce.ndjson similarity index 100% rename from assets/examples/ecommerce.json rename to assets/examples/ecommerce.ndjson From 5e5b6d145a218a690d3e798d4b20594bb13cf53a Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 1 Oct 2024 10:39:11 -0400 Subject: [PATCH 101/111] Make search results page responsive (#8397) Signed-off-by: Fanit Kolchina --- _layouts/search_layout.html | 99 ++++++++++++++++++++----------------- _sass/custom/custom.scss | 97 ++++++++++++++++++++++++++++-------- assets/js/search.js | 21 +++----- 3 files changed, 138 insertions(+), 79 deletions(-) diff --git a/_layouts/search_layout.html b/_layouts/search_layout.html index 67e877fcb8..a2702573ae 100644 --- a/_layouts/search_layout.html +++ b/_layouts/search_layout.html @@ -20,59 +20,70 @@
- - - - Results Page Head from layout - + + + + Results Page Head from layout + + + -
+
+ {% include footer.html %} diff --git a/_sass/custom/custom.scss b/_sass/custom/custom.scss index 3a9dcc5e6d..b3ee3c3775 100755 --- a/_sass/custom/custom.scss +++ b/_sass/custom/custom.scss @@ -1039,14 +1039,25 @@ body { display: flex; align-items: flex-start; justify-content: center; - gap: 20px; - margin: 0 auto; + gap: 0; + border-top: 1px solid #eeebee; + flex-direction: column; + @include mq(md) { + flex-direction: row; + gap: 20px + } } .search-page--sidebar { - flex: 1; - max-width: 200px; - flex: 0 0 200px; + max-width: 100%; + order: 2; + margin-top: 1rem; + color: $grey-dk-300; + @include mq(md) { + flex: 1; + max-width: 200px; + margin-top: 3rem; + } } .search-page--sidebar--category-filter--checkbox-child { @@ -1054,52 +1065,96 @@ body { } .search-page--results { - flex: 3; display: flex; flex-direction: column; align-items: center; - max-width: 60%; + width: 100%; + max-width: 100%; + order: 3; + @include mq(md) { + flex: 3; + max-width: 60%; + } } -.search-page--results--input { - width: 100%; +.search-page--results--wrapper { position: relative; + display: flex; + width: 100%; + background-color: white; + margin: 0 auto 2rem; + max-width: 800px; } .search-page--results--input-box { width: 100%; - padding: 10px; - margin-bottom: 20px; - border: 1px solid #ccc; + padding: 10px 40px 10px 10px; + border: 1px solid $grey-lt-300; border-radius: 4px; + color: $grey-dk-300; } .search-page--results--input-icon { position: absolute; - top: 35%; - right: 10px; - transform: translateY(-50%); + right: 12px; + align-self: center; pointer-events: none; - color: #333; + color: $grey-dk-000; } -.search-page--results--diplay { +.search-page--results--display { width: 100%; position: relative; flex-flow: column nowrap; + margin-top: 1rem; + @media (max-width: $content-width) { + margin-top: 0.5rem; + } } -.search-page--results--diplay--header { +.search-page--results--display--header { text-align: center; - margin-bottom: 20px; background-color: transparent; + color: $grey-dk-300; + margin-bottom: 1rem; + margin-top: 1.5rem; + padding-bottom: 1rem; + border-bottom: 1px solid $blue-dk-100; + font-size: 20px; + @include mq(md) { + font-size: 1.5rem; + } } -.search-page--results--diplay--container--item { - margin-bottom: 1%; +.search-page--results--display--container--item { + margin-bottom: 2rem; display: block; } +.search-page--results--no-results { + padding: 1rem; + display: block; + font-size: 1rem; + font-weight: normal; +} + +.search-page--results--display--container--item--link { + font-family: "Open Sans Condensed", Impact, "Franklin Gothic Bold", sans-serif; + font-size: 1.2rem; + font-weight: bold; + display: block; + text-decoration: underline; + text-underline-offset: 5px; + text-decoration-color: $grey-lt-300; + &:hover { + text-decoration-color: $blue-100; + } +} + +.category-checkbox { + margin-right: 4px; +} + @mixin body-text($color: #000) { color: $color; font-family: 'Open Sans'; diff --git a/assets/js/search.js b/assets/js/search.js index 4d4fce62f3..86970d9544 100644 --- a/assets/js/search.js +++ b/assets/js/search.js @@ -173,7 +173,10 @@ const showNoResults = () => { emptyResults(); - elResults.appendChild(document.createRange().createContextualFragment('No results found!')); + const resultElement = document.createElement('div'); + resultElement.classList.add('search-page--results--no-results'); + resultElement.appendChild(document.createRange().createContextualFragment('No results found.')); + elResults.appendChild(resultElement); showResults(); 
elSpinner?.classList.remove(CLASSNAME_SPINNING); }; @@ -278,8 +281,6 @@ window.doResultsPageSearch = async (query, type, version) => { - console.log("Running results page search!"); - const searchResultsContainer = document.getElementById('searchPageResultsContainer'); try { @@ -291,7 +292,7 @@ window.doResultsPageSearch = async (query, type, version) => { if (data.results && data.results.length > 0) { data.results.forEach(result => { const resultElement = document.createElement('div'); - resultElement.classList.add('search-page--results--diplay--container--item'); + resultElement.classList.add('search-page--results--display--container--item'); const contentCite = document.createElement('cite'); const crumbs = [...result.ancestors]; @@ -302,11 +303,9 @@ window.doResultsPageSearch = async (query, type, version) => { const titleLink = document.createElement('a'); titleLink.href = result.url; + titleLink.classList.add('search-page--results--display--container--item--link'); titleLink.textContent = result.title; - titleLink.style.fontSize = '1.5em'; - titleLink.style.fontWeight = 'bold'; - titleLink.style.display = 'block'; - + const contentSpan = document.createElement('span'); contentSpan.textContent = result.content; contentSpan.style.display = 'block'; @@ -317,16 +316,10 @@ window.doResultsPageSearch = async (query, type, version) => { // Append the result element to the searchResultsContainer searchResultsContainer.appendChild(resultElement); - - const breakline = document.createElement('hr'); - breakline.style.borderTop = '.5px solid #ccc'; - breakline.style.margin = 'auto'; - searchResultsContainer.appendChild(breakline); }); } else { const noResultsElement = document.createElement('div'); noResultsElement.textContent = 'No results found.'; - noResultsElement.style.fontSize = '2em'; searchResultsContainer.appendChild(noResultsElement); } } catch (error) { From 84c12657648ef91abcfaafdfd8efa21a159c162b Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Tue, 1 Oct 2024 12:09:38 -0500 Subject: [PATCH 102/111] Fix broken links (#8420) Signed-off-by: Archer --- _troubleshoot/tls.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_troubleshoot/tls.md b/_troubleshoot/tls.md index 93e9a2c490..6c777ad5b8 100644 --- a/_troubleshoot/tls.md +++ b/_troubleshoot/tls.md @@ -207,7 +207,7 @@ plugins.security.ssl.http.enabled_protocols: TLS relies on the server and client negotiating a common cipher suite. Depending on your system, the available ciphers will vary. They depend on the JDK or OpenSSL version you're using, and whether or not the `JCE Unlimited Strength Jurisdiction Policy Files` are installed. -For legal reasons, the JDK does not include strong ciphers like AES256. In order to use strong ciphers you need to download and install the [Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files](https://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html). If you don't have them installed, you might see an error message on startup: +For legal reasons, the JDK does not include strong ciphers like AES256. In order to use strong ciphers you need to download and install the [Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files](https://www.oracle.com/java/technologies/javase-jce8-downloads.html). If you don't have them installed, you might see an error message on startup: ``` [INFO ] AES-256 not supported, max key length for AES is 128 bit. 
From 2bb90a94a3aef92e9b7a52bdcbf34cb12525df60 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 1 Oct 2024 17:16:51 -0400 Subject: [PATCH 103/111] Add 2.17.1 to version history (#8423) Signed-off-by: Fanit Kolchina --- _about/version-history.md | 1 + 1 file changed, 1 insertion(+) diff --git a/_about/version-history.md b/_about/version-history.md index 47253558e9..b8b9e99309 100644 --- a/_about/version-history.md +++ b/_about/version-history.md @@ -9,6 +9,7 @@ permalink: /version-history/ OpenSearch version | Release highlights | Release date :--- | :--- | :--- +[2.17.1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.17.1.md) | Includes bug fixes for ML Commons, anomaly detection, k-NN, and security analytics. Adds various infrastructure and maintenance updates. For a full list of release highlights, see the Release Notes. | 1 October 2024 [2.17.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.17.0.md) | Includes disk-optimized vector search, binary quantization, and byte vector encoding in k-NN. Adds asynchronous batch ingestion for ML tasks. Provides search and query performance enhancements and a new custom trace source in trace analytics. Includes application-based configuration templates. For a full list of release highlights, see the Release Notes. | 17 September 2024 [2.16.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.16.0.md) | Includes built-in byte vector quantization and binary vector support in k-NN. Adds new sort, split, and ML inference search processors for search pipelines. Provides application-based configuration templates and additional plugins to integrate multiple data sources in OpenSearch Dashboards. Includes an experimental Batch Predict ML Commons API. For a full list of release highlights, see the Release Notes. | 06 August 2024 [2.15.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.15.0.md) | Includes parallel ingestion processing, SIMD support for exact search, and the ability to disable doc values for the k-NN field. Adds wildcard and derived field types. Improves performance for single-cardinality aggregations, rolling upgrades to remote-backed clusters, and more metrics for top N queries. For a full list of release highlights, see the Release Notes. 
| 25 June 2024 From f4a5744411bcfd606aba69e62894c63f2da473c4 Mon Sep 17 00:00:00 2001 From: Reiya Downs <77595987+Reiyadowns@users.noreply.github.com> Date: Thu, 3 Oct 2024 15:31:04 -0400 Subject: [PATCH 104/111] Add 'Cleanup Snapshot' API Documentation (#8373) * Add documentation for Clone Snapshot API Signed-off-by: Reiya Downs * Add documentation for Cleanup Snapshot Repository API Signed-off-by: Reiya Downs * Fixed minor errors, light editing Signed-off-by: Reiya Downs * Remove erroneous clone-repository file Signed-off-by: Reiya Downs * Update cleanup-snapshot-repository.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Reiya Downs Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../snapshots/cleanup-snapshot-repository.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 _api-reference/snapshots/cleanup-snapshot-repository.md diff --git a/_api-reference/snapshots/cleanup-snapshot-repository.md b/_api-reference/snapshots/cleanup-snapshot-repository.md new file mode 100644 index 0000000000..be6e582d22 --- /dev/null +++ b/_api-reference/snapshots/cleanup-snapshot-repository.md @@ -0,0 +1,64 @@ +--- +layout: default +title: Cleanup Snapshot Repository +parent: Snapshot APIs +nav_order: 11 +--- + +# Cleanup Snapshot Repository +Introduced 1.0 +{: .label .label-purple } + +The Cleanup Snapshot Repository API clears a snapshot repository of data no longer referenced by any existing snapshot. + +## Path and HTTP methods + +```json +POST /_snapshot//_cleanup +``` +{% include copy.html %} + + +## Path parameters + +| Parameter | Data type | Description | +| :--- | :--- | :--- | +| `repository` | String | The name of the snapshot repository. | + +## Query parameters + +The following table lists the available query parameters. All query parameters are optional. + +| Parameter | Data type | Description | +| :--- | :--- | :--- | +| `cluster_manager_timeout` | Time | The amount of time to wait for a response from the cluster manager node. Formerly called `master_timeout`. Optional. Default is 30 seconds. | +| `timeout` | Time | The amount of time to wait for the operation to complete. Optional.| + +## Example request + +The following request removes all stale data from the repository `my_backup`: + +```json +POST /_snapshot/my_backup/_cleanup +``` +{% include copy-curl.html %} + + +## Example response + +```json +{ + "results":{ + "deleted_bytes":40, + "deleted_blobs":8 + } +} +``` + +## Response body fields + +| Field | Data type | Description | +| :--- | :--- | :--- | +| `deleted_bytes` | Integer | The number of bytes made available in the snapshot after data deletion. | +| `deleted_blobs` | Integer | The number of binary large objects (BLOBs) cleared from the repository by the request. 
| + From 59bea717e85b3bef00bc9766353442c753057526 Mon Sep 17 00:00:00 2001 From: Naveen Tatikonda Date: Thu, 3 Oct 2024 19:36:37 -0500 Subject: [PATCH 105/111] Change request type of k-NN clear cache api from GET to POST (#8466) Signed-off-by: Naveen Tatikonda --- _search-plugins/knn/api.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_search-plugins/knn/api.md b/_search-plugins/knn/api.md index 1a6c970640..d927bf1c35 100644 --- a/_search-plugins/knn/api.md +++ b/_search-plugins/knn/api.md @@ -185,7 +185,7 @@ This API operation only works with indexes created using the `nmslib` and `faiss The following request evicts the native library indexes of three indexes from the cache: ```json -GET /_plugins/_knn/clear_cache/index1,index2,index3?pretty +POST /_plugins/_knn/clear_cache/index1,index2,index3?pretty { "_shards" : { "total" : 6, @@ -200,7 +200,7 @@ The `total` parameter indicates the number of shards that the API attempted to c The k-NN clear cache API can be used with index patterns to clear one or more indexes that match the given pattern from the cache, as shown in the following example: ```json -GET /_plugins/_knn/clear_cache/index*?pretty +POST /_plugins/_knn/clear_cache/index*?pretty { "_shards" : { "total" : 6, From 8feea9ef134a50c85b984402d161518856f0cb44 Mon Sep 17 00:00:00 2001 From: Ian Menendez <61611304+IanMenendez@users.noreply.github.com> Date: Tue, 8 Oct 2024 10:05:10 -0300 Subject: [PATCH 106/111] [Feature]: add ignore missing to text chunking processor (#8266) * feat: add ignore missing to text chunking processor Signed-off-by: Ian Menendez * Update _ingest-pipelines/processors/text-chunking.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Ian Menendez Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _ingest-pipelines/processors/text-chunking.md | 21 ++++++++++--------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/_ingest-pipelines/processors/text-chunking.md b/_ingest-pipelines/processors/text-chunking.md index 4dccca4926..0141ba1564 100644 --- a/_ingest-pipelines/processors/text-chunking.md +++ b/_ingest-pipelines/processors/text-chunking.md @@ -31,16 +31,17 @@ The following is the syntax for the `text_chunking` processor: The following table lists the required and optional parameters for the `text_chunking` processor. -| Parameter | Data type | Required/Optional | Description | -|:---|:---|:---|:---| -| `field_map` | Object | Required | Contains key-value pairs that specify the mapping of a text field to the output field. | -| `field_map.` | String | Required | The name of the field from which to obtain text for generating chunked passages. | -| `field_map.` | String | Required | The name of the field in which to store the chunked results. | -| `algorithm` | Object | Required | Contains at most one key-value pair that specifies the chunking algorithm and parameters. | -| `algorithm.` | String | Optional | The name of the chunking algorithm. Valid values are [`fixed_token_length`](#fixed-token-length-algorithm) or [`delimiter`](#delimiter-algorithm). Default is `fixed_token_length`. | -| `algorithm.` | Object | Optional | The parameters for the chunking algorithm. By default, contains the default parameters of the `fixed_token_length` algorithm. | -| `description` | String | Optional | A brief description of the processor. 
| -| `tag` | String | Optional | An identifier tag for the processor. Useful when debugging in order to distinguish between processors of the same type. | +| Parameter | Data type | Required/Optional | Description | +|:----------------------------|:----------|:---|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `field_map` | Object | Required | Contains key-value pairs that specify the mapping of a text field to the output field. | +| `field_map.` | String | Required | The name of the field from which to obtain text for generating chunked passages. | +| `field_map.` | String | Required | The name of the field in which to store the chunked results. | +| `algorithm` | Object | Required | Contains at most one key-value pair that specifies the chunking algorithm and parameters. | +| `algorithm.` | String | Optional | The name of the chunking algorithm. Valid values are [`fixed_token_length`](#fixed-token-length-algorithm) or [`delimiter`](#delimiter-algorithm). Default is `fixed_token_length`. | +| `algorithm.` | Object | Optional | The parameters for the chunking algorithm. By default, contains the default parameters of the `fixed_token_length` algorithm. | +| `ignore_missing` | Boolean | Optional | If `true`, empty fields are excluded from the output. If `false`, the output will contain an empty list for every empty field. Default is `false`. | +| `description` | String | Optional | A brief description of the processor. | +| `tag` | String | Optional | An identifier tag for the processor. Useful when debugging in order to distinguish between processors of the same type. | To perform chunking on nested fields, specify `input_field` and `output_field` values as JSON objects. Dot paths of nested fields are not supported. For example, use `"field_map": { "foo": { "bar": "bar_chunk"} }` instead of `"field_map": { "foo.bar": "foo.bar_chunk"}`. {: .note} From ef44d301ba04d1e641bd675a2686e65aa22b2690 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 8 Oct 2024 14:29:19 -0400 Subject: [PATCH 107/111] Refactor analyzers section (#8477) Signed-off-by: Fanit Kolchina --- _analyzers/index-analyzers.md | 1 + _analyzers/index.md | 16 +++---------- _analyzers/language-analyzers.md | 3 ++- _analyzers/search-analyzers.md | 3 ++- _analyzers/supported-analyzers/index.md | 32 +++++++++++++++++++++++++ _analyzers/token-filters/index.md | 2 ++ _analyzers/tokenizers/index.md | 2 ++ 7 files changed, 44 insertions(+), 15 deletions(-) create mode 100644 _analyzers/supported-analyzers/index.md diff --git a/_analyzers/index-analyzers.md b/_analyzers/index-analyzers.md index 72332758d0..3c40755502 100644 --- a/_analyzers/index-analyzers.md +++ b/_analyzers/index-analyzers.md @@ -2,6 +2,7 @@ layout: default title: Index analyzers nav_order: 20 +parent: Analyzers --- # Index analyzers diff --git a/_analyzers/index.md b/_analyzers/index.md index 9b999e5c3d..fec61792b2 100644 --- a/_analyzers/index.md +++ b/_analyzers/index.md @@ -45,20 +45,9 @@ An analyzer must contain exactly one tokenizer and may contain zero or more char There is also a special type of analyzer called a ***normalizer***. A normalizer is similar to an analyzer except that it does not contain a tokenizer and can only include specific types of character filters and token filters. 
These filters can perform only character-level operations, such as character or pattern replacement, and cannot perform operations on the token as a whole. This means that replacing a token with a synonym or stemming is not supported. See [Normalizers]({{site.url}}{{site.baseurl}}/analyzers/normalizers/) for further details. -## Built-in analyzers +## Supported analyzers -The following table lists the built-in analyzers that OpenSearch provides. The last column of the table contains the result of applying the analyzer to the string `It’s fun to contribute a brand-new PR or 2 to OpenSearch!`. - -Analyzer | Analysis performed | Analyzer output -:--- | :--- | :--- -**Standard** (default) | - Parses strings into tokens at word boundaries
- Removes most punctuation
- Converts tokens to lowercase | [`it’s`, `fun`, `to`, `contribute`, `a`,`brand`, `new`, `pr`, `or`, `2`, `to`, `opensearch`] -**Simple** | - Parses strings into tokens on any non-letter character
- Removes non-letter characters
- Converts tokens to lowercase | [`it`, `s`, `fun`, `to`, `contribute`, `a`,`brand`, `new`, `pr`, `or`, `to`, `opensearch`] -**Whitespace** | - Parses strings into tokens on white space | [`It’s`, `fun`, `to`, `contribute`, `a`,`brand-new`, `PR`, `or`, `2`, `to`, `OpenSearch!`] -**Stop** | - Parses strings into tokens on any non-letter character
- Removes non-letter characters
- Removes stop words
- Converts tokens to lowercase | [`s`, `fun`, `contribute`, `brand`, `new`, `pr`, `opensearch`] -**Keyword** (no-op) | - Outputs the entire string unchanged | [`It’s fun to contribute a brand-new PR or 2 to OpenSearch!`] -**Pattern** | - Parses strings into tokens using regular expressions
- Supports converting strings to lowercase
- Supports removing stop words | [`it`, `s`, `fun`, `to`, `contribute`, `a`,`brand`, `new`, `pr`, `or`, `2`, `to`, `opensearch`] -[**Language**]({{site.url}}{{site.baseurl}}/analyzers/language-analyzers/) | Performs analysis specific to a certain language (for example, `english`). | [`fun`, `contribut`, `brand`, `new`, `pr`, `2`, `opensearch`] -**Fingerprint** | - Parses strings on any non-letter character
- Normalizes characters by converting them to ASCII
- Converts tokens to lowercase
- Sorts, deduplicates, and concatenates tokens into a single token
- Supports removing stop words | [`2 a brand contribute fun it's new opensearch or pr to`]
Note that the apostrophe was converted to its ASCII counterpart. +For a list of supported analyzers, see [Analyzers]({{site.url}}{{site.baseurl}}/analyzers/supported-analyzers/index/). ## Custom analyzers @@ -195,3 +184,4 @@ Normalization ensures that searches are not limited to exact term matches, allow ## Next steps - Learn more about specifying [index analyzers]({{site.url}}{{site.baseurl}}/analyzers/index-analyzers/) and [search analyzers]({{site.url}}{{site.baseurl}}/analyzers/search-analyzers/). +- See the list of [supported analyzers]({{site.url}}{{site.baseurl}}/analyzers/supported-analyzers/index/). \ No newline at end of file diff --git a/_analyzers/language-analyzers.md b/_analyzers/language-analyzers.md index f5a2f18cb3..6b5834530e 100644 --- a/_analyzers/language-analyzers.md +++ b/_analyzers/language-analyzers.md @@ -1,7 +1,8 @@ --- layout: default title: Language analyzers -nav_order: 10 +nav_order: 100 +parent: Analyzers redirect_from: - /query-dsl/analyzers/language-analyzers/ --- diff --git a/_analyzers/search-analyzers.md b/_analyzers/search-analyzers.md index b47e739d28..52159edb70 100644 --- a/_analyzers/search-analyzers.md +++ b/_analyzers/search-analyzers.md @@ -2,6 +2,7 @@ layout: default title: Search analyzers nav_order: 30 +parent: Analyzers --- # Search analyzers @@ -42,7 +43,7 @@ GET shakespeare/_search ``` {% include copy-curl.html %} -Valid values for [built-in analyzers]({{site.url}}{{site.baseurl}}/analyzers/index#built-in-analyzers) are `standard`, `simple`, `whitespace`, `stop`, `keyword`, `pattern`, `fingerprint`, or any supported [language analyzer]({{site.url}}{{site.baseurl}}/analyzers/language-analyzers/). +For more information about supported analyzers, see [Analyzers]({{site.url}}{{site.baseurl}}/analyzers/supported-analyzers/index/). ## Specifying a search analyzer for a field diff --git a/_analyzers/supported-analyzers/index.md b/_analyzers/supported-analyzers/index.md new file mode 100644 index 0000000000..af6ce6c3a6 --- /dev/null +++ b/_analyzers/supported-analyzers/index.md @@ -0,0 +1,32 @@ +--- +layout: default +title: Analyzers +nav_order: 40 +has_children: true +has_toc: false +redirect_from: + - /analyzers/supported-analyzers/index/ +--- + +# Analyzers + +The following sections list all analyzers that OpenSearch supports. + +## Built-in analyzers + +The following table lists the built-in analyzers that OpenSearch provides. The last column of the table contains the result of applying the analyzer to the string `It’s fun to contribute a brand-new PR or 2 to OpenSearch!`. + +Analyzer | Analysis performed | Analyzer output +:--- | :--- | :--- +**Standard** (default) | - Parses strings into tokens at word boundaries
- Removes most punctuation
- Converts tokens to lowercase | [`it’s`, `fun`, `to`, `contribute`, `a`,`brand`, `new`, `pr`, `or`, `2`, `to`, `opensearch`] +**Simple** | - Parses strings into tokens on any non-letter character
- Removes non-letter characters
- Converts tokens to lowercase | [`it`, `s`, `fun`, `to`, `contribute`, `a`,`brand`, `new`, `pr`, `or`, `to`, `opensearch`] +**Whitespace** | - Parses strings into tokens on white space | [`It’s`, `fun`, `to`, `contribute`, `a`,`brand-new`, `PR`, `or`, `2`, `to`, `OpenSearch!`] +**Stop** | - Parses strings into tokens on any non-letter character
- Removes non-letter characters
- Removes stop words
- Converts tokens to lowercase | [`s`, `fun`, `contribute`, `brand`, `new`, `pr`, `opensearch`] +**Keyword** (no-op) | - Outputs the entire string unchanged | [`It’s fun to contribute a brand-new PR or 2 to OpenSearch!`] +**Pattern** | - Parses strings into tokens using regular expressions
- Supports converting strings to lowercase
- Supports removing stop words | [`it`, `s`, `fun`, `to`, `contribute`, `a`,`brand`, `new`, `pr`, `or`, `2`, `to`, `opensearch`] +[**Language**]({{site.url}}{{site.baseurl}}/analyzers/language-analyzers/) | Performs analysis specific to a certain language (for example, `english`). | [`fun`, `contribut`, `brand`, `new`, `pr`, `2`, `opensearch`] +**Fingerprint** | - Parses strings on any non-letter character
- Normalizes characters by converting them to ASCII
- Converts tokens to lowercase
- Sorts, deduplicates, and concatenates tokens into a single token
- Supports removing stop words | [`2 a brand contribute fun it's new opensearch or pr to`]
Note that the apostrophe was converted to its ASCII counterpart. + +## Language analyzers + +OpenSearch supports analyzers for various languages. For more information, see [Language analyzers]({{site.url}}{{site.baseurl}}/analyzers/language-analyzers/). \ No newline at end of file diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index 86925123b8..d78ffb42a0 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -4,6 +4,8 @@ title: Token filters nav_order: 70 has_children: true has_toc: false +redirect_from: + - /analyzers/token-filters/index/ --- # Token filters diff --git a/_analyzers/tokenizers/index.md b/_analyzers/tokenizers/index.md index d401851f60..e5ac796c12 100644 --- a/_analyzers/tokenizers/index.md +++ b/_analyzers/tokenizers/index.md @@ -4,6 +4,8 @@ title: Tokenizers nav_order: 60 has_children: false has_toc: false +redirect_from: + - /analyzers/tokenizers/index/ --- # Tokenizers From c4d94fec868dfae67cec1deb899838a2420efb1e Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Tue, 8 Oct 2024 13:58:36 -0500 Subject: [PATCH 108/111] Streamline Benchmark User Guide IA (#8387) * Rework Becnhmark user guide IA Signed-off-by: Archer * More IA tweaks Signed-off-by: Archer * Add index pages. Signed-off-by: Archer * Add redirects Signed-off-by: Archer * Fix redirects. Add index page. Signed-off-by: Archer * Fix link Signed-off-by: Archer --------- Signed-off-by: Archer --- _benchmark/glossary.md | 2 +- .../configuring-benchmark.md | 6 ++++-- .../user-guide/install-and-configure/index.md | 12 +++++++++++ .../installing-benchmark.md | 6 ++++-- .../distributed-load.md | 4 ++-- .../user-guide/optimizing-benchmarks/index.md | 11 ++++++++++ .../target-throughput.md | 5 ++++- _benchmark/user-guide/telemetry.md | 8 ------- .../user-guide/understanding-results/index.md | 12 +++++++++++ .../summary-reports.md} | 8 +++++-- .../understanding-results/telemetry.md | 21 +++++++++++++++++++ .../understanding-workloads/index.md | 2 +- .../contributing-workloads.md | 5 ++++- .../creating-custom-workloads.md | 3 ++- .../finetune-workloads.md} | 5 ++++- .../working-with-workloads/index.md | 16 ++++++++++++++ .../running-workloads.md | 5 ++++- 17 files changed, 108 insertions(+), 23 deletions(-) rename _benchmark/user-guide/{ => install-and-configure}/configuring-benchmark.md (98%) create mode 100644 _benchmark/user-guide/install-and-configure/index.md rename _benchmark/user-guide/{ => install-and-configure}/installing-benchmark.md (98%) rename _benchmark/user-guide/{ => optimizing-benchmarks}/distributed-load.md (98%) create mode 100644 _benchmark/user-guide/optimizing-benchmarks/index.md rename _benchmark/user-guide/{ => optimizing-benchmarks}/target-throughput.md (97%) delete mode 100644 _benchmark/user-guide/telemetry.md create mode 100644 _benchmark/user-guide/understanding-results/index.md rename _benchmark/user-guide/{understanding-results.md => understanding-results/summary-reports.md} (98%) create mode 100644 _benchmark/user-guide/understanding-results/telemetry.md rename _benchmark/user-guide/{ => working-with-workloads}/contributing-workloads.md (97%) rename _benchmark/user-guide/{ => working-with-workloads}/creating-custom-workloads.md (99%) rename _benchmark/user-guide/{finetine-workloads.md => working-with-workloads/finetune-workloads.md} (97%) create mode 100644 _benchmark/user-guide/working-with-workloads/index.md rename _benchmark/user-guide/{ => 
working-with-workloads}/running-workloads.md (99%) diff --git a/_benchmark/glossary.md b/_benchmark/glossary.md index f86591d3d9..a1d2335b8c 100644 --- a/_benchmark/glossary.md +++ b/_benchmark/glossary.md @@ -1,7 +1,7 @@ --- layout: default title: Glossary -nav_order: 10 +nav_order: 100 --- # OpenSearch Benchmark glossary diff --git a/_benchmark/user-guide/configuring-benchmark.md b/_benchmark/user-guide/install-and-configure/configuring-benchmark.md similarity index 98% rename from _benchmark/user-guide/configuring-benchmark.md rename to _benchmark/user-guide/install-and-configure/configuring-benchmark.md index 2be467d587..59ac13a83c 100644 --- a/_benchmark/user-guide/configuring-benchmark.md +++ b/_benchmark/user-guide/install-and-configure/configuring-benchmark.md @@ -1,10 +1,12 @@ --- layout: default -title: Configuring OpenSearch Benchmark +title: Configuring nav_order: 7 -parent: User guide +grand_parent: User guide +parent: Install and configure redirect_from: - /benchmark/configuring-benchmark/ + - /benchmark/user-guide/configuring-benchmark/ --- # Configuring OpenSearch Benchmark diff --git a/_benchmark/user-guide/install-and-configure/index.md b/_benchmark/user-guide/install-and-configure/index.md new file mode 100644 index 0000000000..c0a48278ad --- /dev/null +++ b/_benchmark/user-guide/install-and-configure/index.md @@ -0,0 +1,12 @@ +--- +layout: default +title: Install and configure +nav_order: 5 +parent: User guide +has_children: true +--- + +# Installing and configuring OpenSearch Benchmark + +This section details how to install and configure OpenSearch Benchmark. + diff --git a/_benchmark/user-guide/installing-benchmark.md b/_benchmark/user-guide/install-and-configure/installing-benchmark.md similarity index 98% rename from _benchmark/user-guide/installing-benchmark.md rename to _benchmark/user-guide/install-and-configure/installing-benchmark.md index 8383cfb2f9..1dd30f9180 100644 --- a/_benchmark/user-guide/installing-benchmark.md +++ b/_benchmark/user-guide/install-and-configure/installing-benchmark.md @@ -1,10 +1,12 @@ --- layout: default -title: Installing OpenSearch Benchmark +title: Installing nav_order: 5 -parent: User guide +grand_parent: User guide +parent: Install and configure redirect_from: - /benchmark/installing-benchmark/ + - /benchmark/user-guide/installing-benchmark/ --- # Installing OpenSearch Benchmark diff --git a/_benchmark/user-guide/distributed-load.md b/_benchmark/user-guide/optimizing-benchmarks/distributed-load.md similarity index 98% rename from _benchmark/user-guide/distributed-load.md rename to _benchmark/user-guide/optimizing-benchmarks/distributed-load.md index f16de29f88..9729fe4362 100644 --- a/_benchmark/user-guide/distributed-load.md +++ b/_benchmark/user-guide/optimizing-benchmarks/distributed-load.md @@ -2,12 +2,12 @@ layout: default title: Running distributed loads nav_order: 15 -parent: User guide +parent: Optimizing benchmarks +grand_parent: User guide --- # Running distributed loads - OpenSearch Benchmark loads always run on the same machine on which the benchmark was started. However, you can use multiple load drivers to generate additional benchmark testing loads, particularly for large clusters on multiple machines. This tutorial describes how to distribute benchmark loads across multiple machines in a single cluster. 
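To make the workflow concrete before the tutorial steps, the following sketch shows the general shape of a distributed run. It is illustrative only: the IP addresses are placeholders, and the daemon and flag names shown here (`opensearch-benchmark-daemon`, `--node-ip`, `--coordinator-ip`, `--load-worker-coordinator-hosts`) are assumptions that should be verified against the steps in this tutorial.

```bash
# Illustrative sketch only -- host addresses are placeholders and flag names are assumptions.
# Start the benchmark daemon on the coordinator node and on each worker node
# so that the hosts can coordinate load generation.
opensearch-benchmark-daemon start --node-ip=198.51.100.11 --coordinator-ip=198.51.100.10

# From the coordinator node, run the test and point it at the worker hosts.
opensearch-benchmark execute-test \
  --pipeline=benchmark-only \
  --workload=geonames \
  --target-hosts=198.51.100.50:9200 \
  --load-worker-coordinator-hosts=198.51.100.11,198.51.100.12
```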
## System architecture diff --git a/_benchmark/user-guide/optimizing-benchmarks/index.md b/_benchmark/user-guide/optimizing-benchmarks/index.md new file mode 100644 index 0000000000..0ea6c1978e --- /dev/null +++ b/_benchmark/user-guide/optimizing-benchmarks/index.md @@ -0,0 +1,11 @@ +--- +layout: default +title: Optimizing benchmarks +nav_order: 25 +parent: User guide +has_children: true +--- + +# Optimizing benchmarks + +This section details different ways you can optimize the benchmark tools for your cluster. \ No newline at end of file diff --git a/_benchmark/user-guide/target-throughput.md b/_benchmark/user-guide/optimizing-benchmarks/target-throughput.md similarity index 97% rename from _benchmark/user-guide/target-throughput.md rename to _benchmark/user-guide/optimizing-benchmarks/target-throughput.md index 1be0a8be39..b6c55f96c5 100644 --- a/_benchmark/user-guide/target-throughput.md +++ b/_benchmark/user-guide/optimizing-benchmarks/target-throughput.md @@ -2,7 +2,10 @@ layout: default title: Target throughput nav_order: 150 -parent: User guide +parent: Optimizing benchmarks +grand_parent: User guide +redirect_from: + - /benchmark/user-guide/target-throughput/ --- # Target throughput diff --git a/_benchmark/user-guide/telemetry.md b/_benchmark/user-guide/telemetry.md deleted file mode 100644 index d4c40c790a..0000000000 --- a/_benchmark/user-guide/telemetry.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -layout: default -title: Enabling telemetry devices -nav_order: 30 -parent: User guide ---- - -Telemetry results will not appear in the summary report. To visualize telemetry results, ingest the data into OpenSearch and visualize the data in OpenSearch Dashboards. \ No newline at end of file diff --git a/_benchmark/user-guide/understanding-results/index.md b/_benchmark/user-guide/understanding-results/index.md new file mode 100644 index 0000000000..2122aa0e2e --- /dev/null +++ b/_benchmark/user-guide/understanding-results/index.md @@ -0,0 +1,12 @@ +--- +layout: default +title: Understanding results +nav_order: 20 +parent: User guide +has_children: true +--- + +After [running a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/working-with-workloads/running-workloads/), OpenSearch Benchmark produces a series of metrics. The following pages detail: + +- [How metrics are reported]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-results/summary-reports/) +- [How to visualize metrics]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-results/telemetry/) \ No newline at end of file diff --git a/_benchmark/user-guide/understanding-results.md b/_benchmark/user-guide/understanding-results/summary-reports.md similarity index 98% rename from _benchmark/user-guide/understanding-results.md rename to _benchmark/user-guide/understanding-results/summary-reports.md index 5b8935a8c7..28578c8c89 100644 --- a/_benchmark/user-guide/understanding-results.md +++ b/_benchmark/user-guide/understanding-results/summary-reports.md @@ -1,10 +1,14 @@ --- layout: default -title: Understanding benchmark results +title: Summary reports nav_order: 22 -parent: User guide +grand_parent: User guide +parent: Understanding results +redirect_from: + - /benchmark/user-guide/understanding-results/ --- +# Understanding the summary report At the end of each test run, OpenSearch Benchmark creates a summary of test result metrics like service time, throughput, latency, and more. These metrics provide insights into how the selected workload performed on a benchmarked OpenSearch cluster.
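For context, the summary report is printed to the terminal at the end of a standard test run. The following invocation is a minimal sketch, assuming an existing cluster is reachable at `localhost:9200` and that the `geonames` workload is used; the exact options for your environment may differ.

```bash
# Minimal sketch: run a workload in test mode against an existing cluster.
# The summary report (service time, throughput, latency, and other metrics)
# is printed once the run completes.
opensearch-benchmark execute-test \
  --pipeline=benchmark-only \
  --workload=geonames \
  --target-hosts=localhost:9200 \
  --test-mode
```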
diff --git a/_benchmark/user-guide/understanding-results/telemetry.md b/_benchmark/user-guide/understanding-results/telemetry.md new file mode 100644 index 0000000000..3548dd4456 --- /dev/null +++ b/_benchmark/user-guide/understanding-results/telemetry.md @@ -0,0 +1,21 @@ +--- +layout: default +title: Enabling telemetry devices +nav_order: 30 +grand_parent: User guide +parent: Understanding results +redirect_from: + - /benchmark/user-guide/telemetry +--- + +# Enabling telemetry devices + +Telemetry results will not appear in the summary report. To visualize telemetry results, ingest the data into OpenSearch and visualize the data in OpenSearch Dashboards. + +To view a list of the available telemetry devices, use the command `opensearch-benchmark list telemetry`. After you've selected a [supported telemetry device]({{site.url}}{{site.baseurl}}/benchmark/reference/telemetry/), you can activate the device when running a test with the `--telemetry` command flag. For example, if you want to use the `jfr` device with the `geonames` workload, enter the following command: + +```json +opensearch-benchmark workload --workload=geonames --telemetry=jfr +``` +{% include copy-curl.html %} + diff --git a/_benchmark/user-guide/understanding-workloads/index.md b/_benchmark/user-guide/understanding-workloads/index.md index 844b565185..6e6d2aa9c1 100644 --- a/_benchmark/user-guide/understanding-workloads/index.md +++ b/_benchmark/user-guide/understanding-workloads/index.md @@ -1,7 +1,7 @@ --- layout: default title: Understanding workloads -nav_order: 7 +nav_order: 10 parent: User guide has_children: true --- diff --git a/_benchmark/user-guide/contributing-workloads.md b/_benchmark/user-guide/working-with-workloads/contributing-workloads.md similarity index 97% rename from _benchmark/user-guide/contributing-workloads.md rename to _benchmark/user-guide/working-with-workloads/contributing-workloads.md index e60f60eaed..74524f36cb 100644 --- a/_benchmark/user-guide/contributing-workloads.md +++ b/_benchmark/user-guide/working-with-workloads/contributing-workloads.md @@ -2,7 +2,10 @@ layout: default title: Sharing custom workloads nav_order: 11 -parent: User guide +grand_parent: User guide +parent: Working with workloads +redirect_from: + - /benchmark/user-guide/contributing-workloads/ --- # Sharing custom workloads diff --git a/_benchmark/user-guide/creating-custom-workloads.md b/_benchmark/user-guide/working-with-workloads/creating-custom-workloads.md similarity index 99% rename from _benchmark/user-guide/creating-custom-workloads.md rename to _benchmark/user-guide/working-with-workloads/creating-custom-workloads.md index 6effa9a265..a239c94249 100644 --- a/_benchmark/user-guide/creating-custom-workloads.md +++ b/_benchmark/user-guide/working-with-workloads/creating-custom-workloads.md @@ -2,7 +2,8 @@ layout: default title: Creating custom workloads nav_order: 10 -parent: User guide +grand_parent: User guide +parent: Working with workloads redirect_from: - /benchmark/user-guide/creating-custom-workloads/ - /benchmark/creating-custom-workloads/ diff --git a/_benchmark/user-guide/finetine-workloads.md b/_benchmark/user-guide/working-with-workloads/finetune-workloads.md similarity index 97% rename from _benchmark/user-guide/finetine-workloads.md rename to _benchmark/user-guide/working-with-workloads/finetune-workloads.md index 4fc0a284db..d150247ad8 100644 --- a/_benchmark/user-guide/finetine-workloads.md +++ b/_benchmark/user-guide/working-with-workloads/finetune-workloads.md @@ -2,7 +2,10 @@ layout: default
title: Fine-tuning custom workloads nav_order: 12 -parent: User guide +grand_parent: User guide +parent: Working with workloads +redirect_from: + - /benchmark/user-guide/finetine-workloads/ --- # Fine-tuning custom workloads diff --git a/_benchmark/user-guide/working-with-workloads/index.md b/_benchmark/user-guide/working-with-workloads/index.md new file mode 100644 index 0000000000..a6acb86b4b --- /dev/null +++ b/_benchmark/user-guide/working-with-workloads/index.md @@ -0,0 +1,16 @@ +--- +layout: default +title: Working with workloads +nav_order: 15 +parent: User guide +has_children: true +--- + +# Working with workloads + +Once you [understand workloads]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/index/) and have [chosen a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/choosing-a-workload/) to run your benchmarks with, you can begin working with workloads. + +- [Running workloads]({{site.url}}{{site.baseurl}}/benchmark/user-guide/working-with-workloads/running-workloads/): Learn how to run an OpenSearch Benchmark workload. +- [Creating custom workloads]({{site.url}}{{site.baseurl}}/benchmark/user-guide/working-with-workloads/creating-custom-workloads/): Create a custom workload with your own datasets. +- [Fine-tuning workloads]({{site.url}}{{site.baseurl}}/benchmark/user-guide/working-with-workloads/finetune-workloads/): Fine-tune your custom workload according to the needs of your cluster. +- [Contributing workloads]({{site.url}}{{site.baseurl}}/benchmark/user-guide/working-with-workloads/contributing-workloads/): Contribute your custom workload for the OpenSearch community to use. \ No newline at end of file diff --git a/_benchmark/user-guide/running-workloads.md b/_benchmark/user-guide/working-with-workloads/running-workloads.md similarity index 99% rename from _benchmark/user-guide/running-workloads.md rename to _benchmark/user-guide/working-with-workloads/running-workloads.md index 36108eb9c8..534d61f6b9 100644 --- a/_benchmark/user-guide/running-workloads.md +++ b/_benchmark/user-guide/working-with-workloads/running-workloads.md @@ -2,7 +2,10 @@ layout: default title: Running a workload nav_order: 9 -parent: User guide +grand_parent: User guide +parent: Working with workloads +redirect_from: + - /benchmark/user-guide/running-workloads/ --- # Running a workload From 3f77141d64f8517aa43eecf2ef7933197616c54e Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 8 Oct 2024 16:43:23 -0400 Subject: [PATCH 109/111] Language analyzers small refactor (#8479) Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _analyzers/language-analyzers.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/_analyzers/language-analyzers.md b/_analyzers/language-analyzers.md index 6b5834530e..ca4ba320dd 100644 --- a/_analyzers/language-analyzers.md +++ b/_analyzers/language-analyzers.md @@ -7,9 +7,9 @@ redirect_from: - /query-dsl/analyzers/language-analyzers/ --- -# Language analyzer +# Language analyzers -OpenSearch supports the following language values with the `analyzer` option: +OpenSearch supports the following language analyzers: `arabic`, `armenian`, `basque`, `bengali`, `brazilian`, `bulgarian`, `catalan`, `czech`, `danish`, `dutch`, `english`, `estonian`, `finnish`, `french`, `galician`, `german`, `greek`, `hindi`, `hungarian`, `indonesian`, `irish`, `italian`, `latvian`, `lithuanian`, `norwegian`, `persian`, `portuguese`, `romanian`, 
`russian`, `sorani`, `spanish`, `swedish`, `turkish`, and `thai`. To use the analyzer when you map an index, specify the value within your query. For example, to map your index with the French language analyzer, specify the `french` value for the analyzer field: @@ -41,4 +41,4 @@ PUT my-index } ``` - \ No newline at end of file + From 5f53f5b8492fb4c8af721e945a59156537dbaed6 Mon Sep 17 00:00:00 2001 From: wieso-itzi <85185077+wieso-itzi@users.noreply.github.com> Date: Wed, 9 Oct 2024 16:28:01 +0200 Subject: [PATCH 110/111] fix typo in viz-index.md (#8483) Signed-off-by: wieso-itzi <85185077+wieso-itzi@users.noreply.github.com> --- _dashboards/visualize/viz-index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_dashboards/visualize/viz-index.md b/_dashboards/visualize/viz-index.md index 75407a6ba5..4bde79d2cc 100644 --- a/_dashboards/visualize/viz-index.md +++ b/_dashboards/visualize/viz-index.md @@ -81,7 +81,7 @@ Region maps show patterns and trends across geographic locations. A region map i ### Markdown -Markdown is a the markup language used in Dashboards to provide context to your data visualizations. Using Markdown, you can display information and instructions along with the visualization. +Markdown is the markup language used in Dashboards to provide context to your data visualizations. Using Markdown, you can display information and instructions along with the visualization. Example coordinate map in OpenSearch Dashboards From cd31d8274d6aea688a74c2843ea6304997e55276 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Wed, 9 Oct 2024 15:33:32 +0100 Subject: [PATCH 111/111] Add Classic token filter docs (#7918) * adding classic token filter docs #7876 Signed-off-by: AntonEliatra * Updating details as per comments Signed-off-by: AntonEliatra * Update classic.md Signed-off-by: AntonEliatra * Update classic.md Signed-off-by: AntonEliatra * Update classic.md Signed-off-by: AntonEliatra * Update _analyzers/token-filters/classic.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower Signed-off-by: AntonEliatra --------- Signed-off-by: AntonEliatra Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- _analyzers/token-filters/classic.md | 93 +++++++++++++++++++++++++++++ _analyzers/token-filters/index.md | 2 +- 2 files changed, 94 insertions(+), 1 deletion(-) create mode 100644 _analyzers/token-filters/classic.md diff --git a/_analyzers/token-filters/classic.md b/_analyzers/token-filters/classic.md new file mode 100644 index 0000000000..34db74a824 --- /dev/null +++ b/_analyzers/token-filters/classic.md @@ -0,0 +1,93 @@ +--- +layout: default +title: Classic +parent: Token filters +nav_order: 50 +--- + +# Classic token filter + +The primary function of the classic token filter is to work alongside the classic tokenizer. It processes tokens by applying the following common transformations, which aid in text analysis and search: + - Removal of possessive endings such as *'s*. For example, *John's* becomes *John*. + - Removal of periods from acronyms. For example, *D.A.R.P.A.* becomes *DARPA*. 
+ + +## Example + +The following example request creates a new index named `custom_classic_filter` and configures an analyzer with the `classic` filter: + +```json +PUT /custom_classic_filter +{ + "settings": { + "analysis": { + "analyzer": { + "custom_classic": { + "type": "custom", + "tokenizer": "classic", + "filter": ["classic"] + } + } + } + } +} +``` +{% include copy-curl.html %} + +## Generated tokens + +Use the following request to examine the tokens generated using the analyzer: + +```json +POST /custom_classic_filter/_analyze +{ + "analyzer": "custom_classic", + "text": "John's co-operate was excellent." +} +``` +{% include copy-curl.html %} + +The response contains the generated tokens: + +```json +{ + "tokens": [ + { + "token": "John", + "start_offset": 0, + "end_offset": 6, + "type": "<APOSTROPHE>", + "position": 0 + }, + { + "token": "co", + "start_offset": 7, + "end_offset": 9, + "type": "<ALPHANUM>", + "position": 1 + }, + { + "token": "operate", + "start_offset": 10, + "end_offset": 17, + "type": "<ALPHANUM>", + "position": 2 + }, + { + "token": "was", + "start_offset": 18, + "end_offset": 21, + "type": "<ALPHANUM>", + "position": 3 + }, + { + "token": "excellent", + "start_offset": 22, + "end_offset": 31, + "type": "<ALPHANUM>", + "position": 4 + } + ] +} +``` + diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index d78ffb42a0..7d5814e06b 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -19,7 +19,7 @@ Token filter | Underlying Lucene token filter| Description [`asciifolding`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/asciifolding/) | [ASCIIFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html) | Converts alphabetic, numeric, and symbolic characters. `cjk_bigram` | [CJKBigramFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html) | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens. [`cjk_width`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/cjk-width/) | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules:
- Folds full-width ASCII character variants into their equivalent basic Latin characters.
- Folds half-width katakana character variants into their equivalent kana characters. -`classic` | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms. +[`classic`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/classic) | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms. `common_grams` | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams. `conditional` | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script. `decimal_digit` | [DecimalDigitFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/DecimalDigitFilter.html) | Converts all digits in the Unicode decimal number general category to basic Latin digits (0--9).