Reindent configuration file example, improve readability, etc. (#1441)

* pipeline: inputs: opentelemetry: tabs -> 4 spaces

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: inputs: prometheus-remote-write: tabs -> 4 spaces

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: filters: kubernetes: yaml syntax highlight

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: filters: kubernetes: reindent config file

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: filters: lua: reindent config

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: outputs: chronicle: Remove dashes which cause markdown break

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: outputs: oci-logging-analytics: Add newlines

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: outputs: oci-logging-analytics: reindent json fields

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: outputs: s3: reindent with 4 spaces

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: outputs: vivo-exporter: Remove unusual line terminator

LSEP (U+2028)

Signed-off-by: Seonghyeon Cho <[email protected]>

* pipeline: outputs: websocket: Add newline

Signed-off-by: Seonghyeon Cho <[email protected]>

---------

Signed-off-by: Seonghyeon Cho <[email protected]>
sh-cho authored Aug 22, 2024
1 parent fb9458a commit f689765
Showing 9 changed files with 120 additions and 114 deletions.
46 changes: 23 additions & 23 deletions pipeline/filters/kubernetes.md
@@ -271,7 +271,7 @@ There are some configuration setup needed for this feature.

Role Configuration for Fluent Bit DaemonSet Example:

-```text
+```yaml
---
apiVersion: v1
kind: ServiceAccount
@@ -314,34 +314,34 @@ The difference is that kubelet need a special permission for resource `nodes/pro
Fluent Bit Configuration Example:

```text
-[INPUT]
-Name tail
-Tag kube.*
-Path /var/log/containers/*.log
-DB /var/log/flb_kube.db
-Parser docker
-Docker_Mode On
-Mem_Buf_Limit 50MB
-Skip_Long_Lines On
-Refresh_Interval 10
-[FILTER]
-Name kubernetes
-Match kube.*
-Kube_URL https://kubernetes.default.svc.cluster.local:443
-Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
-Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
-Merge_Log On
-Buffer_Size 0
-Use_Kubelet true
-Kubelet_Port 10250
+[INPUT]
+    Name tail
+    Tag kube.*
+    Path /var/log/containers/*.log
+    DB /var/log/flb_kube.db
+    Parser docker
+    Docker_Mode On
+    Mem_Buf_Limit 50MB
+    Skip_Long_Lines On
+    Refresh_Interval 10
+[FILTER]
+    Name kubernetes
+    Match kube.*
+    Kube_URL https://kubernetes.default.svc.cluster.local:443
+    Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
+    Merge_Log On
+    Buffer_Size 0
+    Use_Kubelet true
+    Kubelet_Port 10250
```

So in the Fluent Bit configuration, you need to set `Use_Kubelet` to true to enable this feature.
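
For comparison, the same tail input and kubernetes filter can also be written in Fluent Bit's YAML configuration format. This is only a sketch; the option names are assumed to carry over unchanged from the classic-mode example above:

```yaml
# Sketch: tail input plus kubernetes filter with the kubelet endpoint enabled.
pipeline:
  inputs:
    - name: tail
      tag: kube.*
      path: /var/log/containers/*.log
      db: /var/log/flb_kube.db
      parser: docker
      docker_mode: on
      mem_buf_limit: 50MB
      skip_long_lines: on
      refresh_interval: 10
  filters:
    - name: kubernetes
      match: kube.*
      kube_url: https://kubernetes.default.svc.cluster.local:443
      kube_ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      kube_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      merge_log: on
      buffer_size: 0
      use_kubelet: true
      kubelet_port: 10250
```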

DaemonSet config Example:

-```text
+```yaml
---
apiVersion: apps/v1
kind: DaemonSet
46 changes: 23 additions & 23 deletions pipeline/filters/lua.md
@@ -196,12 +196,12 @@ We want to extract the `sandboxbsh` name and add it to our record as a special k
{% tabs %}
{% tab title="fluent-bit.conf" %}
```
-[FILTER]
-Name lua
-Alias filter-iots-lua
-Match iots_thread.*
-Script filters.lua
-Call set_landscape_deployment
+[FILTER]
+    Name lua
+    Alias filter-iots-lua
+    Match iots_thread.*
+    Script filters.lua
+    Call set_landscape_deployment
```
{% endtab %}

@@ -358,23 +358,23 @@ Configuration to get istio logs and apply response code filter to them.
{% tabs %}
{% tab title="fluent-bit.conf" %}
```ini
-[INPUT]
-Name tail
-Path /var/log/containers/*_istio-proxy-*.log
-multiline.parser docker, cri
-Tag istio.*
-Mem_Buf_Limit 64MB
-Skip_Long_Lines Off
-
-[FILTER]
-Name lua
-Match istio.*
-Script response_code_filter.lua
-call cb_response_code_filter
-
-[Output]
-Name stdout
-Match *
+[INPUT]
+    Name tail
+    Path /var/log/containers/*_istio-proxy-*.log
+    multiline.parser docker, cri
+    Tag istio.*
+    Mem_Buf_Limit 64MB
+    Skip_Long_Lines Off
+
+[FILTER]
+    Name lua
+    Match istio.*
+    Script response_code_filter.lua
+    call cb_response_code_filter
+
+[Output]
+    Name stdout
+    Match *
```
{% endtab %}

10 changes: 5 additions & 5 deletions pipeline/inputs/opentelemetry.md
@@ -79,13 +79,13 @@ pipeline:
{% tab title="fluent-bit.conf" %}
```
[INPUT]
-name opentelemetry
-listen 127.0.0.1
-port 4318
+    name opentelemetry
+    listen 127.0.0.1
+    port 4318

[OUTPUT]
-name stdout
-match *
+    name stdout
+    match *
```
{% endtab %}

26 changes: 13 additions & 13 deletions pipeline/inputs/prometheus-remote-write.md
@@ -26,14 +26,14 @@ A sample config file to get started will look something like the following:
{% tab title="fluent-bit.conf" %}
```
[INPUT]
-name prometheus_remote_write
-listen 127.0.0.1
-port 8080
-uri /api/prom/push
+    name prometheus_remote_write
+    listen 127.0.0.1
+    port 8080
+    uri /api/prom/push
[OUTPUT]
-name stdout
-match *
+    name stdout
+    match *
```
{% endtab %}

@@ -65,13 +65,13 @@ Communicating with TLS, you will need to use the tls related parameters:

```
[INPUT]
-Name prometheus_remote_write
-Listen 127.0.0.1
-Port 8080
-Uri /api/prom/push
-Tls On
-tls.crt_file /path/to/certificate.crt
-tls.key_file /path/to/certificate.key
+    Name prometheus_remote_write
+    Listen 127.0.0.1
+    Port 8080
+    Uri /api/prom/push
+    Tls On
+    tls.crt_file /path/to/certificate.crt
+    tls.key_file /path/to/certificate.key
```

Now, you should be able to send data over TLS to the remote write input.
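
On the Prometheus side, a hypothetical `remote_write` block pointing at this listener could look like the following sketch; the URL, port, and certificate paths are placeholders that must match your Fluent Bit TLS setup:

```yaml
# prometheus.yml fragment (sketch only); adjust host, port, and certificate paths.
remote_write:
  - url: "https://127.0.0.1:8080/api/prom/push"
    tls_config:
      ca_file: /path/to/ca.crt        # CA that signed certificate.crt above
      # insecure_skip_verify: true    # only for throwaway self-signed test certs
```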
2 changes: 0 additions & 2 deletions pipeline/outputs/chronicle.md
@@ -1,5 +1,3 @@
----
-
# Chronicle

The Chronicle output plugin allows ingesting security logs into the [Google Chronicle](https://chronicle.security/) service. This connector is designed to send unstructured security logs.
19 changes: 13 additions & 6 deletions pipeline/outputs/oci-logging-analytics.md
@@ -86,11 +86,13 @@ In case of multiple inputs, where oci_la_* properties can differ, you can add th
[INPUT]
Name dummy
Tag dummy
[Filter]
Name modify
Match *
Add oci_la_log_source_name <LOG_SOURCE_NAME>
Add oci_la_log_group_id <LOG_GROUP_OCID>
[Output]
Name oracle_log_analytics
Match *
@@ -109,6 +111,7 @@ You can attach certain metadata to the log events collected from various inputs.
[INPUT]
Name dummy
Tag dummy
[Output]
Name oracle_log_analytics
Match *
@@ -138,12 +141,12 @@ The above configuration will generate a payload that looks like this
"metadata": {
"key1": "value1",
"key2": "value2"
-},
-"logSourceName": "example_log_source",
-"logRecords": [
-"dummy"
-]
-}
+        },
+        "logSourceName": "example_log_source",
+        "logRecords": [
+            "dummy"
+        ]
+    }
]
}
```
@@ -156,23 +159,27 @@ With oci_config_in_record option set to true, the metadata key-value pairs will
[INPUT]
Name dummy
Tag dummy
[FILTER]
Name Modify
Match *
Add olgm.key1 val1
Add olgm.key2 val2
[FILTER]
Name nest
Match *
Operation nest
Wildcard olgm.*
Nest_under oci_la_global_metadata
Remove_prefix olgm.
[Filter]
Name modify
Match *
Add oci_la_log_source_name <LOG_SOURCE_NAME>
Add oci_la_log_group_id <LOG_GROUP_OCID>
[Output]
Name oracle_log_analytics
Match *
82 changes: 41 additions & 41 deletions pipeline/outputs/s3.md
@@ -198,13 +198,13 @@ The following settings are recommended for this use case:

```
[OUTPUT]
-Name s3
-Match *
-bucket your-bucket
-region us-east-1
-total_file_size 1M
-upload_timeout 1m
-use_put_object On
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    total_file_size 1M
+    upload_timeout 1m
+    use_put_object On
```

## S3 Multipart Uploads
@@ -252,14 +252,14 @@ Example:

```
[OUTPUT]
-Name s3
-Match *
-bucket your-bucket
-region us-east-1
-total_file_size 1M
-upload_timeout 1m
-use_put_object On
-workers 1
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    total_file_size 1M
+    upload_timeout 1m
+    use_put_object On
+    workers 1
```

If you enable a single worker, you are enabling a dedicated thread for your S3 output. We recommend starting without workers, evaluating the performance, and then enabling a worker if needed. For most users, the plugin can provide sufficient throughput without workers.
@@ -274,10 +274,10 @@ Example:

```
[OUTPUT]
-Name s3
-Match *
-bucket your-bucket
-endpoint http://localhost:9000
+    Name s3
+    Match *
+    bucket your-bucket
+    endpoint http://localhost:9000
```

Then, the records will be stored in the MinIO server.
@@ -300,27 +300,27 @@ In your main configuration file append the following _Output_ section:

```
[OUTPUT]
-Name s3
-Match *
-bucket your-bucket
-region us-east-1
-store_dir /home/ec2-user/buffer
-total_file_size 50M
-upload_timeout 10m
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    store_dir /home/ec2-user/buffer
+    total_file_size 50M
+    upload_timeout 10m
```

An example using PutObject instead of multipart:

```
[OUTPUT]
-Name s3
-Match *
-bucket your-bucket
-region us-east-1
-store_dir /home/ec2-user/buffer
-use_put_object On
-total_file_size 10M
-upload_timeout 10m
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    store_dir /home/ec2-user/buffer
+    use_put_object On
+    total_file_size 10M
+    upload_timeout 10m
```

## AWS for Fluent Bit
@@ -387,15 +387,15 @@ Once compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format.

```
[INPUT]
-Name cpu
+    Name cpu
[OUTPUT]
-Name s3
-Bucket your-bucket-name
-total_file_size 1M
-use_put_object On
-upload_timeout 60s
-Compression arrow
+    Name s3
+    Bucket your-bucket-name
+    total_file_size 1M
+    use_put_object On
+    upload_timeout 60s
+    Compression arrow
```

As shown in this example, setting `Compression` to `arrow` makes Fluent Bit convert the payload into Apache Arrow format.
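
If your build does not include Arrow support, the `compression` option can still be set to `gzip`. Below is a sketch in the YAML configuration format, assuming the option names match the classic-format example above:

```yaml
# Sketch: gzip-compressed uploads, no Arrow build flags required.
pipeline:
  inputs:
    - name: cpu
  outputs:
    - name: s3
      bucket: your-bucket-name
      total_file_size: 1M
      use_put_object: on
      upload_timeout: 60s
      compression: gzip
```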
2 changes: 1 addition & 1 deletion pipeline/outputs/vivo-exporter.md
@@ -25,7 +25,7 @@ Here is a simple configuration of Vivo Exporter, note that this example is not b
match *
empty_stream_on_read off
stream_queue_size 20M
-http_cors_allow_origin *
+http_cors_allow_origin *
```

### How it works
