Merge branch 'elastic:main' into ensureUniqueModelIds
maxhniebergall authored Jan 5, 2024
2 parents e265c04 + 36f08ea commit 063e128
Showing 501 changed files with 6,166 additions and 4,197 deletions.
41 changes: 41 additions & 0 deletions .buildkite/README.md
@@ -0,0 +1,41 @@
# Elasticsearch CI Pipelines

This directory contains pipeline definitions and scripts for running Elasticsearch CI on Buildkite.

## Directory Structure

- [pipelines](pipelines/) - pipeline definitions/yml
- [scripts](scripts/) - scripts used by pipelines, inside steps
- [hooks](hooks/) - [Buildkite hooks](https://buildkite.com/docs/agent/v3/hooks), where global env vars and secrets are set

## Pipeline Definitions

Pipelines are defined using YAML files residing in [pipelines](pipelines/). These are mostly static definitions that are used as-is, but there are a few dynamically-generated exceptions (see below).

### Dynamically Generated Pipelines

Pull request pipelines are generated dynamically based on labels, files changed, and other properties of pull requests.

Non-pull request pipelines that include BWC version matrices must also be generated whenever the [list of BWC versions](../.ci/bwcVersions) is updated.

#### Pull Request Pipelines

Pull request pipelines are generated dynamically at CI time based on numerous properties of the pull request. See [scripts/pull-request](scripts/pull-request) for details.

#### BWC Version Matrices

For pipelines that include BWC version matrices, you will see one or more template files (e.g. [periodic.template.yml](pipelines/periodic.template.yml)) and a corresponding generated file (e.g. [periodic.yml](pipelines/periodic.yml)). The generated file is the one that is actually used by Buildkite.

These files are updated by running:

```bash
./gradlew updateCIBwcVersions
```

This also runs automatically during release procedures.

You should always make changes to the template files, and run the above command to update the generated files.
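
For orientation, the relationship between the two looks roughly like the sketch below; the step name, command, and placeholder convention are illustrative, not copied from the real pipeline files.

```yaml
# Hypothetical fragment of a pipelines/*.template.yml file (names are illustrative).
steps:
  - label: "bwc $BWC_VERSION"
    command: .ci/scripts/run-gradle.sh v$BWC_VERSION#bwcTest
    # In the generated pipelines/*.yml counterpart, `./gradlew updateCIBwcVersions`
    # emits one copy of this step per version listed in ../.ci/bwcVersions, with
    # $BWC_VERSION substituted (e.g. 7.17.17, 8.12.0).
```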

## Node / TypeScript

Node (technically `bun`), TypeScript, and related files are currently used to generate pipelines for pull request CI. See [scripts/pull-request](scripts/pull-request) for details.
1 change: 0 additions & 1 deletion .buildkite/package.json
@@ -1,6 +1,5 @@
{
"name": "buildkite-pipelines",
"module": "index.ts",
"type": "module",
"devDependencies": {
"@types/node": "^20.6.0",
3 changes: 2 additions & 1 deletion .buildkite/pipelines/periodic.template.yml
@@ -73,6 +73,7 @@ steps:
- openjdk19
- openjdk20
- openjdk21
- openjdk22
GRADLE_TASK:
- checkPart1
- checkPart2
@@ -180,7 +181,7 @@ steps:
image: family/elasticsearch-ubuntu-2004
machineType: n2-standard-8
buildDirectory: /dev/shm/bk
if: build.branch == "main" || build.branch =~ /^[0-9]+\.[0-9]+\$/
if: build.branch == "main" || build.branch == "7.17"
- label: Check branch consistency
command: .ci/scripts/run-gradle.sh branchConsistency
timeout_in_minutes: 15
1 change: 1 addition & 0 deletions .buildkite/pipelines/periodic.yml
@@ -1194,6 +1194,7 @@ steps:
- openjdk19
- openjdk20
- openjdk21
- openjdk22
GRADLE_TASK:
- checkPart1
- checkPart2
68 changes: 60 additions & 8 deletions .buildkite/scripts/pull-request/README.md
@@ -6,12 +6,7 @@ Each time a pull request build is triggered, such as via commit or comment, we u

The generator handles the following:

- `allow-labels` - only trigger a step if the PR has one of these labels
- `skip-labels` - don't trigger the step if the PR has one of these labels
- `excluded-regions` - don't trigger the step if **all** of the changes in the PR match these paths/regexes
- `included-regions` - trigger the step if **all** of the changes in the PR match these paths/regexes
- `trigger-phrase` - trigger this step, and ignore all other steps, if the build was triggered by a comment and that comment matches this regex
- Note that each step has an automatic phrase of `.*run\\W+elasticsearch-ci/<step-name>.*`
- Various configurations for filtering/activating steps based on labels, changed files, etc. See below.
- Replacing `$SNAPSHOT_BWC_VERSIONS` in pipelines with an array of versions from `.ci/snapshotBwcVersions`
- Duplicating any step with `bwc_template: true` for each BWC version in `.ci/bwcVersions`
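
As a rough illustration of those last two points, a pipeline fragment using them might look like the sketch below; the labels, commands, and exact placement of `$SNAPSHOT_BWC_VERSIONS` are made up for the example, not taken from the real pull-request pipelines.

```yaml
# Hypothetical pull-request pipeline fragment; names and values are illustrative.
steps:
  - label: "bwc $BWC_VERSION"
    command: .ci/scripts/run-gradle.sh v$BWC_VERSION#bwcTest
    bwc_template: true   # duplicated by the generator once per version in .ci/bwcVersions
  - label: "bwc-snapshots {{matrix.BWC_VERSION}}"
    command: .ci/scripts/run-gradle.sh v{{matrix.BWC_VERSION}}#bwcTest
    matrix:
      setup:
        BWC_VERSION: $SNAPSHOT_BWC_VERSIONS   # replaced with the array from .ci/snapshotBwcVersions
```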

@@ -21,18 +16,75 @@ The generator handles the following:

Pipelines are in [`.buildkite/pipelines`](../../pipelines/pull-request). They are automatically picked up and given a name based on their filename.


## Setup

- [Install bun](https://bun.sh/)
- `npm install -g bun` will work if you already have `npm`
- `cd .buildkite; bun install` to install dependencies

## Run tests
## Testing

Testing the pipeline generator is done mostly using snapshot tests, which generate pipeline objects using the pipeline configurations in `mocks/pipelines` and then compare them to previously-generated snapshots in `__snapshots__` to confirm that they are correct.

The mock pipeline configurations should, therefore, try to cover all of the various features of the generator (allow-labels, skip-labels, etc.).

Snapshots are generated/managed automatically whenever you create a new test that has a snapshot test condition. These are very similar to Jest snapshots.

### Run tests

```bash
cd .buildkite
bun test
```

If you need to regenerate the snapshots, run `bun test --update-snapshots`.

## Pipeline Configuration

The `config:` property at the top of pipelines inside `.buildkite/pipelines/pull-request` is a custom property used by our pipeline generator. It is not used by Buildkite.

All of the pipelines in this directory are evaluated whenever CI for a pull request is started, and the steps are filtered and combined into one pipeline based on the properties in `config:` and the state of the pull request.

The various configurations available mirror what we were using in our Jenkins pipelines.
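
For illustration, a hypothetical pipeline file using several of these properties might look like the sketch below; the file name, labels, regexes, and step are invented for the example. Each property is described in the sections that follow.

```yaml
# Hypothetical .buildkite/pipelines/pull-request/part-3.yml; all values are illustrative.
config:
  allow-labels: ["test-part-3"]
  skip-labels: [">test-mute"]
  excluded-regions: ["^docs/.*", "^x-pack/docs/.*"]
  trigger-phrase: "^run\\W+elasticsearch-ci/part-3.*"
steps:
  - label: part-3
    command: .ci/scripts/run-gradle.sh checkPart3
    timeout_in_minutes: 300
```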

### Config Properties

#### `allow-labels`

- Type: `string|string[]`
- Example: `["test-full-bwc"]`

Only trigger a step if the PR has one of these labels.

#### `skip-labels`

- Type: `string|string[]`
- Example: `>test-mute`

Don't trigger the step if the PR has one of these labels.

#### `excluded-regions`

- Type: `string|string[]` - must be JavaScript regexes
- Example: `["^docs/.*", "^x-pack/docs/.*"]`

Exclude the pipeline if all of the changed files in the PR match at least one regex. E.g. for the example above, don't run the step if all of the changed files are docs changes.

#### `included-regions`

- Type: `string|string[]` - must be JavaScript regexes
- Example: `["^docs/.*", "^x-pack/docs/.*"]`

Only include the pipeline if all of the changed files in the PR match at least one regex. E.g. for the example above, only run the step if all of the changed files are docs changes.

This is particularly useful for having a step that only runs, for example, when all of the other steps get filtered out because of the `excluded-regions` property.

#### `trigger-phrase`

- Type: `string` - must be a JavaScript regex
- Example: `"^run\\W+elasticsearch-ci/test-full-bwc.*"`
- Default: `.*run\\W+elasticsearch-ci/<step-name>.*` (`<step-name>` is generated from the filename of the yml file).

Trigger this step, and ignore all other steps, if the build was triggered by a comment and that comment matches this regex.

Note that the entire build itself is triggered via [`.buildkite/pull-requests.json`](../pull-requests.json). So, a comment has to first match the trigger configured there.
@@ -43,6 +43,7 @@
import org.elasticsearch.core.IOUtils;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.mapper.BlockLoader;
import org.elasticsearch.index.mapper.FieldNamesFieldMapper;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.NumberFieldMapper;
@@ -202,6 +203,11 @@ public Set<String> sourcePaths(String name) {
public String parentField(String field) {
throw new UnsupportedOperationException();
}

@Override
public FieldNamesFieldMapper.FieldNamesFieldType fieldNames() {
return FieldNamesFieldMapper.FieldNamesFieldType.get(true);
}
});
}
throw new IllegalArgumentException("can't read [" + name + "]");
@@ -108,7 +108,8 @@ private SourceToParse generateRandomDocument() {
if (random.nextBoolean()) {
continue;
}
String objFieldPrefix = Stream.generate(() -> "obj_field_" + idx).limit(objFieldDepth).collect(Collectors.joining("."));
int objFieldDepthActual = random.nextInt(1, objFieldDepth);
String objFieldPrefix = Stream.generate(() -> "obj_field_" + idx).limit(objFieldDepthActual).collect(Collectors.joining("."));
for (int j = 0; j < textFields; j++) {
if (random.nextBoolean()) {
StringBuilder fieldValueBuilder = generateTextField(fieldValueCountMax);
@@ -45,7 +45,7 @@ public static MapperService create(String mappings) {
.put("index.number_of_replicas", 0)
.put("index.number_of_shards", 1)
.put(IndexMetadata.SETTING_VERSION_CREATED, IndexVersion.current())
.put("index.mapping.total_fields.limit", 10000)
.put("index.mapping.total_fields.limit", 100000)
.build();
IndexMetadata meta = IndexMetadata.builder("index").settings(settings).build();
IndexSettings indexSettings = new IndexSettings(meta, settings);
2 changes: 1 addition & 1 deletion build-tools-internal/version.properties
@@ -36,7 +36,7 @@ protobuf = 3.21.9

# test dependencies
randomizedrunner = 2.8.0
junit = 4.12
junit = 4.13.2
junit5 = 5.7.1
hamcrest = 2.1
mocksocket = 1.2
@@ -7,7 +7,6 @@
*/
package org.elasticsearch.plugin.noop.action.search;

import org.apache.lucene.search.TotalHits;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
@@ -18,7 +17,6 @@
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.util.concurrent.EsExecutors;
import org.elasticsearch.plugin.noop.NoopPlugin;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.profile.SearchProfileResults;
@@ -44,7 +42,7 @@ public TransportNoopSearchAction(TransportService transportService, ActionFilter
protected void doExecute(Task task, SearchRequest request, ActionListener<SearchResponse> listener) {
listener.onResponse(
new SearchResponse(
new SearchHits(new SearchHit[0], new TotalHits(0L, TotalHits.Relation.EQUAL_TO), 0.0f),
SearchHits.EMPTY_WITH_TOTAL_HITS,
InternalAggregations.EMPTY,
new Suggest(Collections.emptyList()),
false,
5 changes: 5 additions & 0 deletions docs/changelog/101487.yaml
@@ -0,0 +1,5 @@
pr: 101487
summary: Wait for async searches to finish when shutting down
area: Infra/Node Lifecycle
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/102207.yaml
@@ -0,0 +1,6 @@
pr: 102207
summary: Fix disk computation when initializing unassigned shards in desired balance
computation
area: Allocation
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103232.yaml
@@ -0,0 +1,5 @@
pr: 103232
summary: "Remove leniency in msearch parsing"
area: Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103453.yaml
@@ -0,0 +1,5 @@
pr: 103453
summary: Add expiration time to update api key api
area: Security
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103632.yaml
@@ -0,0 +1,5 @@
pr: 103632
summary: "ESQL: Check field exists before load from `_source`"
area: ES|QL
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103865.yaml
@@ -0,0 +1,5 @@
pr: 103865
summary: Revert change
area: Mapping
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103873.yaml
@@ -0,0 +1,5 @@
pr: 103873
summary: Catch exceptions during `pytorch_inference` startup
area: Machine Learning
type: bug
issues: []
14 changes: 14 additions & 0 deletions docs/changelog/103898.yaml
@@ -0,0 +1,14 @@
pr: 103898
summary: Change `index.look_ahead_time` index setting's default value from 2 hours to 30 minutes.
area: TSDB
type: breaking
issues: []
breaking:
title: Change `index.look_ahead_time` index setting's default value from 2 hours to 30 minutes.
area: Index setting
details: Lower the `index.look_ahead_time` index setting's max value from 2 hours to 30 minutes.
impact: >
Documents with @timestamp of 30 minutes or more in the future will be rejected.
Before documents with @timestamp of 2 hours or more in the future were rejected.
If the previous behaviour should be kept, then update the `index.look_ahead_time` setting to two hours before performing the upgrade.
notable: false
5 changes: 5 additions & 0 deletions docs/changelog/103923.yaml
@@ -0,0 +1,5 @@
pr: 103923
summary: Preserve response headers in Datafeed preview
area: Machine Learning
type: bug
issues: []
11 changes: 8 additions & 3 deletions docs/reference/rest-api/security/bulk-update-api-keys.asciidoc
@@ -30,7 +30,7 @@ This operation can greatly improve performance over making individual updates.

It's not possible to update expired or <<security-api-invalidate-api-key,invalidated>> API keys.

This API supports updates to API key access scope and metadata.
This API supports updates to API key access scope, metadata and expiration.
The access scope of each API key is derived from the <<security-api-bulk-update-api-keys-api-key-role-descriptors,`role_descriptors`>> you specify in the request, and a snapshot of the owner user's permissions at the time of the request.
The snapshot of the owner's permissions is updated automatically on every call.

@@ -63,6 +63,9 @@ The structure of a role descriptor is the same as the request for the <<api-key-
Within the `metadata` object, top-level keys beginning with an underscore (`_`) are reserved for system usage.
Any information specified with this parameter fully replaces metadata previously associated with the API key.

`expiration`::
(Optional, string) Expiration time for the API keys. By default, API keys never expire. Can be omitted to leave unchanged.

[[security-api-bulk-update-api-keys-response-body]]
==== {api-response-body-title}

@@ -166,7 +169,8 @@ Further, assume that the owner user's permissions are:
--------------------------------------------------
// NOTCONSOLE

The following example updates the API keys created above, assigning them new role descriptors and metadata.
The following example updates the API keys created above, assigning them new role descriptors and metadata, and updating their expiration time.

[source,console]
----
@@ -192,7 +196,8 @@ POST /_security/api_key/_bulk_update
"trusted": true,
"tags": ["production"]
}
}
},
"expiration": "30d"
}
----
// TEST[skip:api key ids not available]
5 changes: 4 additions & 1 deletion docs/reference/rest-api/security/update-api-key.asciidoc
@@ -30,7 +30,7 @@ If you need to apply the same update to many API keys, you can use <<security-ap

It's not possible to update expired API keys, or API keys that have been invalidated by <<security-api-invalidate-api-key,invalidate API Key>>.

This API supports updates to an API key's access scope and metadata.
This API supports updates to an API key's access scope, metadata and expiration.
The access scope of an API key is derived from the <<security-api-update-api-key-api-key-role-descriptors,`role_descriptors`>> you specify in the request, and a snapshot of the owner user's permissions at the time of the request.
The snapshot of the owner's permissions is updated automatically on every call.

@@ -67,6 +67,9 @@ It supports nested data structure.
Within the `metadata` object, top-level keys beginning with `_` are reserved for system usage.
When specified, this fully replaces metadata previously associated with the API key.

`expiration`::
(Optional, string) Expiration time for the API key. By default, API keys never expire. Can be omitted to leave unchanged.

[[security-api-update-api-key-response-body]]
==== {api-response-body-title}

@@ -34,7 +34,7 @@ Use this API to update cross-cluster API keys created by the <<security-api-crea
It's not possible to update expired API keys, or API keys that have been invalidated by
<<security-api-invalidate-api-key,invalidate API Key>>.

This API supports updates to an API key's access scope and metadata.
This API supports updates to an API key's access scope, metadata and expiration.
The owner user's information, e.g. `username`, `realm`, is also updated automatically on every call.

NOTE: This API cannot update <<security-api-create-api-key,REST API keys>>, which should be updated by
@@ -66,6 +66,9 @@ It supports nested data structure.
Within the `metadata` object, top-level keys beginning with `_` are reserved for system usage.
When specified, this fully replaces metadata previously associated with the API key.

`expiration`::
(Optional, string) Expiration time for the API key. By default, API keys never expire. Can be omitted to leave unchanged.

[[security-api-update-cross-cluster-api-key-response-body]]
==== {api-response-body-title}

@@ -58,9 +58,9 @@ DELETE /_enrich/policy/clientip_policy

// tag::demo-env[]

On the demo environment at https://esql.demo.elastic.co/[esql.demo.elastic.co],
On the demo environment at https://ela.st/ql/[ela.st/ql],
an enrich policy called `clientip_policy` has already been created and executed.
The policy links an IP address to an environment ("Development", "QA", or
"Production")
"Production").

// end::demo-env[]
@@ -43,6 +43,6 @@ PUT sample_data/_bulk

The data set used in this guide has been preloaded into the Elastic {esql}
public demo environment. Visit
https://esql.demo.elastic.co/[esql.demo.elastic.co] to start using it.
https://ela.st/ql[ela.st/ql] to start using it.

// end::demo-env[]