[8.x] [Fleet/EA] Logstash & Kafka Outputs refresh (#1306) #1358

Merged 1 commit on Oct 1, 2024

== Kafka output and using {ls} to index data to {es}

If you are considering using {ls} to ship data from `kafka` to {es}, be aware
that Elastic does not currently test this kind of setup.

The structure of the documents sent from {agent} to `kafka` must not be modified by {ls}.
We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec.

Refer to the <<logstash-output,{ls} output for {agent}>> documentation for more details.

[source,yaml]
----
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
----
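
For completeness, the matching output side of such a pipeline passes the events through unchanged to {es}. The following is a minimal sketch, not part of this change, assuming a local {es} on port `9200` and a placeholder API key:

[source,yaml]
----
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    api_key => "<api_key>"
    data_stream => true   # route each event to its original data stream
  }
}
----

Because `ecs_compatibility` is disabled on the input side, the events keep the document structure produced by {agent}, and `data_stream => true` lets {es} route them as if they had been shipped directly.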

== Kafka output configuration settings

The `kafka` output supports the following settings, grouped by category.
Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently.

// =============================================================================

|===
To receive the events in {ls}, you also need to create a {ls} configuration pipeline.
The {ls} configuration pipeline listens for incoming {agent} connections,
processes received events, and then sends the events to {es}.

The following example {ls} pipeline definition listens on port `5044` for
incoming {agent} connections and routes received events to {es}.

[source,yaml]
----
input {
  elastic_agent {
    port => 5044
    enrich => none  # don't modify the events' schema at all
    # or minimal change: add only ssl and source metadata
    # enrich => [ssl_peer_metadata, source_metadata]
    ssl => true
    ssl_certificate_authorities => ["<ca_path>"]
    ssl_certificate => "<server_cert_path>"
    ssl_key => "<server_cert_key_in_pkcs8>"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"] <1>
    # cloud_id => "..."
    api_key => "<api_key>" <2>
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----
<1> The {es} server and the port (`9200`) where {es} is running.
<2> The API key used by {ls} to ship data to the destination data streams.
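
On the {agent} side, a standalone agent would point its output at this pipeline. The following is a hedged sketch of the corresponding `elastic-agent.yml` outputs section, with placeholder host and certificate paths (these settings are assumed from the standalone output options, not taken from this change):

[source,yaml]
----
outputs:
  default:
    type: logstash
    hosts: ["<logstash_host>:5044"]
    # mutual TLS matching the ssl_verify_mode => "force_peer" pipeline above
    ssl.certificate_authorities: ["<ca_path>"]
    ssl.certificate: "<client_cert_path>"
    ssl.key: "<client_cert_key_path>"
----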

For more information about configuring {ls}, refer to
{logstash-ref}/configuration.html[Configuring {ls}] and

Specify these settings to send data over a secure connection to Kafka. In the {fleet} <<output-settings,Output settings>>, make sure that the Kafka output type is selected.

== Kafka output and using {ls} to index data to {es}

If you are considering using {ls} to ship data from `kafka` to {es}, be aware
that Elastic does not currently test this kind of setup.

The structure of the documents sent from {agent} to `kafka` must not be modified by {ls}.
We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec.

Refer to the <<ls-output-settings,{ls} output for {agent}>> documentation for more details.

[source,yaml]
----
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
----
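
For reference, the `kafka` output that produces the events consumed above can also be configured in a standalone `elastic-agent.yml`. This is a minimal sketch with placeholder broker and topic values, not part of this change:

[source,yaml]
----
outputs:
  default:
    type: kafka
    hosts: ["<kafka_host>:9092"]
    topic: "elastic-agent"
----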

[discrete]
== General settings

Before using the {ls} output, you need to make sure that for any integrations th

To learn how to generate certificates, refer to <<secure-logstash-connections>>.

To receive the events in {ls}, you also need to create a {ls} configuration pipeline.
The {ls} configuration pipeline listens for incoming {agent} connections,
processes received events, and then sends the events to {es}.

The following example configures a {ls} pipeline that listens on port `5044` for
incoming {agent} connections and routes received events to {es}.

The {ls} pipeline definition below is an example. Refer to the *Additional Logstash
configuration required* steps when creating the {ls} output in the {fleet} Outputs page.

[source,yaml]
----
input {
  elastic_agent {
    port => 5044
    enrich => none  # don't modify the events' schema at all
    ssl => true
    ssl_certificate_authorities => ["<ca_path>"]
    ssl_certificate => "<server_cert_path>"
    ssl_key => "<server_cert_key_in_pkcs8>"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"] <1>
    # cloud_id => "..."
    api_key => "<api_key>" <2>
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----
<1> The {es} server and the port (`9200`) where {es} is running.
<2> The API key obtained from the {ls} output creation steps in {fleet}.
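
Once written, the pipeline definition has to be loaded by {ls}. One way is to register it in `pipelines.yml`; this sketch assumes the definition was saved to a hypothetical path:

[source,yaml]
----
- pipeline.id: elastic-agent-pipeline
  path.config: "/etc/logstash/conf.d/elastic-agent.conf"
----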

[cols="2*<a"]
|===
|
include::../elastic-agent/configuration/outputs/output-shared-settings.asciidoc[]

|===

:type!: