Merge remote-tracking branch 'origin/main' into lucene_snapshot
elasticsearchmachine committed Aug 5, 2024
2 parents 1698430 + f352418 commit 0b8278a
Showing 82 changed files with 362 additions and 616 deletions.
158 changes: 113 additions & 45 deletions README.asciidoc
@@ -33,76 +33,144 @@ https://www.elastic.co/downloads/elasticsearch[elastic.co/downloads/elasticsearch]
=== Run Elasticsearch locally

////
IMPORTANT: This content is replicated in the Elasticsearch guide. See `run-elasticsearch-locally.asciidoc`.
Both will soon be replaced by a quickstart script.
////

[WARNING]
====
DO NOT USE THESE INSTRUCTIONS FOR PRODUCTION DEPLOYMENTS.

This setup is intended for local development and testing only.
====

The following commands help you very quickly spin up a single-node Elasticsearch cluster, together with Kibana in Docker.
Use this setup for local development or testing.

==== Prerequisites

If you don't have Docker installed, https://www.docker.com/products/docker-desktop[download and install Docker Desktop] for your operating system.

==== Set environment variables

Configure the following environment variables.

[source,sh]
----
export ELASTIC_PASSWORD="<ES_PASSWORD>"  # password for the "elastic" username
export KIBANA_PASSWORD="<KIB_PASSWORD>"  # used internally by Kibana; must be at least 6 characters long
----
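Nothing validates these variables until the containers start. As a small pre-flight check you could enforce the constraint noted above before running anything; the helper below is a hypothetical sketch, not part of any Elastic tooling:

```python
import os


def check_passwords(env=None):
    """Return a list of problems with the two variables, empty if all is well."""
    env = os.environ if env is None else env
    problems = []
    if not env.get("ELASTIC_PASSWORD"):
        problems.append("ELASTIC_PASSWORD is not set")
    # Kibana requires its internal password to be at least 6 characters long
    if len(env.get("KIBANA_PASSWORD", "")) < 6:
        problems.append("KIBANA_PASSWORD must be at least 6 characters long")
    return problems


print(check_passwords({"ELASTIC_PASSWORD": "secret", "KIBANA_PASSWORD": "kibana-pass"}))
```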

==== Create a Docker network

To run both Elasticsearch and Kibana, you'll need to create a Docker network:

[source,sh]
----
docker network create elastic-net
----

==== Run Elasticsearch

Start the Elasticsearch container with the following command:

[source,sh]
----
docker run -p 127.0.0.1:9200:9200 -d --name elasticsearch --network elastic-net \
  -e ELASTIC_PASSWORD=$ELASTIC_PASSWORD \
  -e "discovery.type=single-node" \
  -e "xpack.security.http.ssl.enabled=false" \
  -e "xpack.license.self_generated.type=trial" \
  docker.elastic.co/elasticsearch/elasticsearch:{version}
----

==== Run Kibana (optional)

To run Kibana, you must first set the `kibana_system` password in the Elasticsearch container.

[source,sh]
----
# configure the Kibana password in the ES container
curl -u elastic:$ELASTIC_PASSWORD \
  -X POST \
  http://localhost:9200/_security/user/kibana_system/_password \
  -d '{"password":"'"$KIBANA_PASSWORD"'"}' \
  -H 'Content-Type: application/json'
----
// NOTCONSOLE

Start the Kibana container with the following command:

[source,sh]
----
docker run -p 127.0.0.1:5601:5601 -d --name kibana --network elastic-net \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
  -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
  -e ELASTICSEARCH_USERNAME=kibana_system \
  -e ELASTICSEARCH_PASSWORD=$KIBANA_PASSWORD \
  -e "xpack.security.enabled=false" \
  -e "xpack.license.self_generated.type=trial" \
  docker.elastic.co/kibana/kibana:{version}
----

.Trial license
[%collapsible]
====
The service is started with a trial license. The trial license enables all features of Elasticsearch for a trial period of 30 days. After the trial period expires, the license is downgraded to a basic license, which is free forever. If you prefer to skip the trial and use the basic license, set the value of the `xpack.license.self_generated.type` variable to `basic` instead. For a detailed feature comparison between the different licenses, refer to our https://www.elastic.co/subscriptions[subscriptions page].
====

==== Send requests to Elasticsearch

You can interact with Elasticsearch using any client that sends HTTP requests,
such as the https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch
language clients] and https://curl.se[curl].

===== Using curl

Here's an example curl command to create a new Elasticsearch index, using basic auth:

[source,sh]
----
curl -u elastic:$ELASTIC_PASSWORD \
  -X PUT \
  http://localhost:9200/my-new-index \
  -H 'Content-Type: application/json'
----
// NOTCONSOLE
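If you'd rather build the same request from Python without the client library, the standard library alone is enough. This is a sketch: it assumes the `ELASTIC_PASSWORD` variable from the earlier step and uses a placeholder default.

```python
import base64
import os
import urllib.request

password = os.getenv("ELASTIC_PASSWORD", "changeme")  # placeholder default for illustration
token = base64.b64encode(f"elastic:{password}".encode()).decode()

# PUT http://localhost:9200/my-new-index with basic auth, like the curl command above
request = urllib.request.Request(
    "http://localhost:9200/my-new-index",
    method="PUT",
    headers={
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send it once the cluster is running
```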

===== Using a language client

To connect to your local dev Elasticsearch cluster with a language client, you can use basic authentication with the `elastic` username and the password you set in the environment variable.

You'll use the following connection details:

* **Elasticsearch endpoint**: `http://localhost:9200`
* **Username**: `elastic`
* **Password**: `$ELASTIC_PASSWORD` (the value you set in the environment variable)

For example, to connect with the Python `elasticsearch` client:

[source,python]
----
import os

from elasticsearch import Elasticsearch

username = 'elastic'
password = os.getenv('ELASTIC_PASSWORD')  # value you set in the environment variable

client = Elasticsearch(
    "http://localhost:9200",
    basic_auth=(username, password)
)

print(client.info())
----

===== Using the Dev Tools Console

Kibana's developer console provides an easy way to experiment and test requests.
To access the console, open Kibana, then go to **Management** > **Dev Tools**.

**Add data**

@@ -106,8 +106,12 @@ public static KeyStore filter(KeyStore store, Predicate<KeyStoreEntry> filter) {
* @param certificates The root certificates to trust
*/
public static KeyStore buildTrustStore(Iterable<Certificate> certificates) throws GeneralSecurityException {
return buildTrustStore(certificates, KeyStore.getDefaultType());
}

public static KeyStore buildTrustStore(Iterable<Certificate> certificates, String type) throws GeneralSecurityException {
assert certificates != null : "Cannot create keystore with null certificates";
KeyStore store = buildNewKeyStore(type);
int counter = 0;
for (Certificate certificate : certificates) {
store.setCertificateEntry("cert-" + counter, certificate);
@@ -117,7 +121,11 @@ public static KeyStore buildTrustStore(Iterable<Certificate> certificates) throws GeneralSecurityException {
}
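The change above layers a type-parameterized `buildNewKeyStore(String type)` under the existing API, with the one-argument form forwarding the default type. In Python the same default-forwarding shape collapses into a default argument. The sketch below is hypothetical: certificates are opaque objects, the store is a plain dict, and the `"jks"` default is a stand-in for `KeyStore.getDefaultType()`.

```python
def build_trust_store(certificates, store_type="jks"):  # "jks" is an illustrative default
    """Alias each certificate as cert-0, cert-1, ... like the Java loop above."""
    if certificates is None:
        raise AssertionError("Cannot create keystore with null certificates")
    store = {"type": store_type}
    for counter, certificate in enumerate(certificates):
        store[f"cert-{counter}"] = certificate
    return store
```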

private static KeyStore buildNewKeyStore() throws GeneralSecurityException {
return buildNewKeyStore(KeyStore.getDefaultType());
}

private static KeyStore buildNewKeyStore(String type) throws GeneralSecurityException {
KeyStore keyStore = KeyStore.getInstance(type);
try {
keyStore.load(null, null);
} catch (IOException e) {
@@ -151,11 +151,8 @@ public int intValue(boolean coerce) throws IOException {

protected abstract int doIntValue() throws IOException;

private static final BigInteger LONG_MAX_VALUE_AS_BIGINTEGER = BigInteger.valueOf(Long.MAX_VALUE);
private static final BigInteger LONG_MIN_VALUE_AS_BIGINTEGER = BigInteger.valueOf(Long.MIN_VALUE);

/** Return the long that {@code stringValue} stores or throws an exception if the
* stored value cannot be converted to a long that stores the exact same
@@ -170,11 +167,21 @@ private static long toLong(String stringValue, boolean coerce) {
final BigInteger bigIntegerValue;
try {
final BigDecimal bigDecimalValue = new BigDecimal(stringValue);
// long can have a maximum of 19 digits - any more than that cannot be a long
// the scale is stored as the negation, so negative scale -> big number
if (bigDecimalValue.scale() < -19) {
throw new IllegalArgumentException("Value [" + stringValue + "] is out of range for a long");
}
// large scale -> very small number
if (bigDecimalValue.scale() > 19) {
if (coerce) {
bigIntegerValue = BigInteger.ZERO;
} else {
throw new ArithmeticException("Number has a decimal part");
}
} else {
bigIntegerValue = coerce ? bigDecimalValue.toBigInteger() : bigDecimalValue.toBigIntegerExact();
}
} catch (ArithmeticException e) {
throw new IllegalArgumentException("Value [" + stringValue + "] has a decimal part");
} catch (NumberFormatException e) {
@@ -31,6 +31,7 @@
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.in;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.nullValue;
import static org.junit.internal.matchers.ThrowableMessageMatcher.hasMessage;

@@ -74,6 +75,44 @@ public void testFloat() throws IOException {
}
}

public void testLongCoercion() throws IOException {
XContentType xContentType = randomFrom(XContentType.values());

try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) {
builder.startObject();
builder.field("decimal", "5.5");
builder.field("expInRange", "5e18");
builder.field("expTooBig", "2e100");
builder.field("expTooSmall", "2e-100");
builder.endObject();

try (XContentParser parser = createParser(xContentType.xContent(), BytesReference.bytes(builder))) {
assertThat(parser.nextToken(), is(XContentParser.Token.START_OBJECT));

assertThat(parser.nextToken(), is(XContentParser.Token.FIELD_NAME));
assertThat(parser.currentName(), is("decimal"));
assertThat(parser.nextToken(), is(XContentParser.Token.VALUE_STRING));
assertThat(parser.longValue(), equalTo(5L));

assertThat(parser.nextToken(), is(XContentParser.Token.FIELD_NAME));
assertThat(parser.currentName(), is("expInRange"));
assertThat(parser.nextToken(), is(XContentParser.Token.VALUE_STRING));
assertThat(parser.longValue(), equalTo((long) 5e18));

assertThat(parser.nextToken(), is(XContentParser.Token.FIELD_NAME));
assertThat(parser.currentName(), is("expTooBig"));
assertThat(parser.nextToken(), is(XContentParser.Token.VALUE_STRING));
expectThrows(IllegalArgumentException.class, parser::longValue);

// too small goes to zero
assertThat(parser.nextToken(), is(XContentParser.Token.FIELD_NAME));
assertThat(parser.currentName(), is("expTooSmall"));
assertThat(parser.nextToken(), is(XContentParser.Token.VALUE_STRING));
assertThat(parser.longValue(), equalTo(0L));
}
}
}
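The coercion rules this test exercises can be modeled outside Java. The sketch below is a rough Python rendering of the new `toLong` logic, assuming Python's `Decimal` exponent plays the role of the negated `BigDecimal` scale, with an explicit final range check standing in for the `BigInteger` bound comparison the surrounding Java code performs:

```python
from decimal import Decimal

LONG_MAX = 2**63 - 1
LONG_MIN = -(2**63)


def to_long(string_value: str, coerce: bool) -> int:
    d = Decimal(string_value)
    scale = -d.as_tuple().exponent  # Python's exponent is the negated BigDecimal scale
    if scale < -19:
        # more than 19 digits before the point: can never fit in a signed 64-bit long
        raise ValueError(f"Value [{string_value}] is out of range for a long")
    if scale > 19:
        # a very small fraction: coerces to zero, otherwise it has a decimal part
        if not coerce:
            raise ValueError(f"Value [{string_value}] has a decimal part")
        big = 0
    elif coerce:
        big = int(d)  # truncates toward zero, like BigDecimal.toBigInteger()
    else:
        if d != d.to_integral_value():
            raise ValueError(f"Value [{string_value}] has a decimal part")
        big = int(d)
    if big > LONG_MAX or big < LONG_MIN:
        raise ValueError(f"Value [{string_value}] is out of range for a long")
    return big
```

The same inputs as the test behave the same way: `"5.5"` coerces to 5, `"5e18"` is in range, `"2e100"` is rejected, and `"2e-100"` coerces to zero.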

public void testReadList() throws IOException {
assertThat(readList("{\"foo\": [\"bar\"]}"), contains("bar"));
assertThat(readList("{\"foo\": [\"bar\",\"baz\"]}"), contains("bar", "baz"));
@@ -17,7 +17,6 @@
import org.elasticsearch.aggregations.AggregationIntegTestCase;
import org.elasticsearch.aggregations.bucket.timeseries.InternalTimeSeries;
import org.elasticsearch.aggregations.bucket.timeseries.TimeSeriesAggregationBuilder;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
@@ -102,15 +101,11 @@ private CreateIndexResponse prepareTimeSeriesIndex(
final String[] routingDimensions
) {
return prepareCreate("index").setSettings(
indexSettings(randomIntBetween(1, 3), randomIntBetween(1, 3)).put("mode", "time_series")
.put("routing_path", String.join(",", routingDimensions))
.put("time_series.start_time", startMillis)
.put("time_series.end_time", endMillis)
).setMapping(mapping).get();
}

@@ -16,7 +16,6 @@
import org.elasticsearch.aggregations.AggregationIntegTestCase;
import org.elasticsearch.aggregations.bucket.timeseries.InternalTimeSeries;
import org.elasticsearch.aggregations.bucket.timeseries.TimeSeriesAggregationBuilder;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
@@ -103,15 +102,11 @@ private CreateIndexResponse prepareTimeSeriesIndex(
final String[] routingDimensions
) {
return prepareCreate("index").setSettings(
indexSettings(randomIntBetween(1, 3), randomIntBetween(1, 3)).put("mode", "time_series")
.put("routing_path", String.join(",", routingDimensions))
.put("time_series.start_time", startMillis)
.put("time_series.end_time", endMillis)
).setMapping(mapping).get();
}

@@ -112,11 +112,7 @@ public void resetClusterSetting() {
public void testRolloverOnAutoShardCondition() throws Exception {
final String dataStreamName = "logs-es";

putComposableIndexTemplate("my-template", List.of("logs-*"), indexSettings(3, 0).build());
final var createDataStreamRequest = new CreateDataStreamAction.Request(dataStreamName);
assertAcked(client().execute(CreateDataStreamAction.INSTANCE, createDataStreamRequest).actionGet());

@@ -277,11 +273,7 @@ public void testReduceShardsOnRollover() throws IOException {
final String dataStreamName = "logs-es";

// start with 3 shards
putComposableIndexTemplate("my-template", List.of("logs-*"), indexSettings(3, 0).build());
final var createDataStreamRequest = new CreateDataStreamAction.Request(dataStreamName);
assertAcked(client().execute(CreateDataStreamAction.INSTANCE, createDataStreamRequest).actionGet());

@@ -391,11 +383,7 @@ public void testLazyRolloverKeepsPreviousAutoshardingDecision() throws IOException {
public void testLazyRolloverKeepsPreviousAutoshardingDecision() throws IOException {
final String dataStreamName = "logs-es";

putComposableIndexTemplate("my-template", List.of("logs-*"), indexSettings(3, 0).build());
final var createDataStreamRequest = new CreateDataStreamAction.Request(dataStreamName);
assertAcked(client().execute(CreateDataStreamAction.INSTANCE, createDataStreamRequest).actionGet());

