[chore] Spelling exporter #37133

Open: wants to merge 117 commits into main.

Commits (117, all authored by jsoref):
2fcbed5  spelling: add (Jan 7, 2025)
9265414  spelling: aggregated (Jan 5, 2025)
fb96227  spelling: aggregates (Jan 5, 2025)
e1c7bca  spelling: aggregations (Jan 5, 2025)
f2672e2  spelling: alternate (Jan 5, 2025)
47e54ba  spelling: ambiguous (Jan 5, 2025)
846d70f  spelling: and (Jan 7, 2025)
f289dfa  spelling: annotation (Jan 5, 2025)
84ebc21  spelling: applicable (Jan 5, 2025)
7909acc  spelling: attribute (Jan 5, 2025)
53432a0  spelling: attributes (Jan 5, 2025)
dbc6ae1  spelling: batch (Jan 5, 2025)
bdee575  spelling: be (Jan 7, 2025)
f1582c3  spelling: builder (Jan 5, 2025)
cf2a16e  spelling: cannot (Jan 3, 2025)
7e43bfd  spelling: case-sensitive (Jan 3, 2025)
5d9a544  spelling: collisions (Jan 6, 2025)
7d1d008  spelling: connections (Jan 6, 2025)
fe9d4b0  spelling: consider (Jan 6, 2025)
ac04031  spelling: constituent (Jan 6, 2025)
02ba0b0  spelling: consumer (Jan 6, 2025)
5464a68  spelling: content (Jan 6, 2025)
221ec68  spelling: corresponding (Jan 6, 2025)
26c656e  spelling: creator-on (Jan 8, 2025)
0284a8a  spelling: current (Jan 8, 2025)
a40ac8e  spelling: datadog (Jan 6, 2025)
9bef021  spelling: datapoint (Jan 6, 2025)
9455c7b  spelling: delta (Jan 6, 2025)
a92993f  spelling: descriptor (Jan 6, 2025)
28fb631  spelling: deterministically (Jan 7, 2025)
c0fccd2  spelling: disabled (Jan 7, 2025)
d93863d  spelling: distinguish (Jan 7, 2025)
03f2a50  spelling: document (Jan 7, 2025)
650047c  spelling: encountered (Jan 7, 2025)
e4ec0c9  spelling: endpoints (Jan 7, 2025)
2ee274e  spelling: event (Jan 7, 2025)
aad2028  spelling: exactly (Jan 7, 2025)
f39bcdf  spelling: exemplar (Jan 8, 2025)
bd7f879  spelling: exporter (Jan 7, 2025)
81166ad  spelling: failed (Jan 7, 2025)
305f38a  spelling: filtering (Jan 7, 2025)
f003e52  spelling: flaky (Jan 7, 2025)
07486e0  spelling: for (Jan 3, 2025)
71cf274  spelling: full (Jan 7, 2025)
63535e1  spelling: gauge (Jan 7, 2025)
1378d99  spelling: googlemanagedprometheus (Jan 8, 2025)
57fa17d  spelling: greater (Jan 3, 2025)
42ffd46  spelling: here's (Jan 7, 2025)
928d1a7  spelling: histogram (Jan 7, 2025)
cf0d785  spelling: http-client (Jan 3, 2025)
22cde2a  spelling: httpforwarder (Jan 7, 2025)
bd46be8  spelling: id (Jan 3, 2025)
f7e0b83  spelling: in increased (Jan 7, 2025)
8a863b0  spelling: instrumentation (Jan 7, 2025)
0c3bca5  spelling: internal (Jan 7, 2025)
3c2b247  spelling: is equal (Jan 3, 2025)
64f6179  spelling: its (Jan 3, 2025)
4c09785  spelling: loadbalancing (Jan 7, 2025)
3926a2e  spelling: marshaling (Jan 8, 2025)
e0127bd  spelling: mccontainer (Jan 8, 2025)
04f14ec  spelling: metrics (Jan 7, 2025)
16f4c86  spelling: milliseconds (Jan 8, 2025)
6875822  spelling: misleading (Jan 8, 2025)
62512df  spelling: name (Jan 7, 2025)
a952671  spelling: nonexistent (Jan 3, 2025)
ff20e00  spelling: of (Jan 7, 2025)
b69a131  spelling: opentelemetry (Jan 7, 2025)
2065842  spelling: option (Jan 8, 2025)
836a26e  spelling: partition (Jan 8, 2025)
2e3ed4c  spelling: path (Jan 7, 2025)
c3fa417  spelling: pending (Jan 8, 2025)
c6bcfe3  spelling: permanent (Jan 7, 2025)
0840355  spelling: profiles (Jan 8, 2025)
38482c9  spelling: prometheus (Jan 7, 2025)
6bbe5a1  spelling: request (Jan 7, 2025)
59e6da6  spelling: requests (Jan 7, 2025)
71bf56e  spelling: resource (Jan 7, 2025)
7c6cbf9  spelling: response (Jan 7, 2025)
12bc05a  spelling: running (Jan 7, 2025)
8f275a4  spelling: schema (Jan 7, 2025)
3cd0a1c  spelling: sensible (Jan 7, 2025)
a65b0fc  spelling: service (Jan 7, 2025)
8a6891f  spelling: stacktrace (Jan 7, 2025)
1ff6c02  spelling: statement (Jan 8, 2025)
fd8b880  spelling: still (Jan 3, 2025)
54db066  spelling: the (Jan 3, 2025)
b53d4b6  spelling: traces (Jan 7, 2025)
a0dd035  spelling: transient (Jan 7, 2025)
f0ebfa9  spelling: truncation (Jan 8, 2025)
e910b9d  spelling: values (Jan 8, 2025)
5e79ab6  spelling: whether (Jan 7, 2025)
dce2add  spelling: without (Jan 6, 2025)
4bf36ef  spelling: writing (Jan 8, 2025)
56ec407  link: OpenTelemetry Logs Data Model specification (Jan 3, 2025)
119ceba  link: Prometheus label names standard (Jan 3, 2025)
278ed2d  link: Sentry Span (Jan 3, 2025)
b6a2604  link: native otlphttp exporter (Jan 5, 2025)
1690af2  link: authentication token provided by Splunk Observability Cloud (Jan 3, 2025)
a98b8db  link: batching configuration on the exporter (Jan 5, 2025)
f64a563  link: default metrics (Jan 5, 2025)
ba97925  link: distributor config parameter (Jan 3, 2025)
5c22bcf  link: exporter (Jan 5, 2025)
70c9736  link: exporterhelper configuration parameters (Jan 3, 2025)
3699cff  link: exporterhelper (Jan 3, 2025)
47a987c  link: full list of `ServerConfig` (Jan 3, 2025)
eb096a8  link: grpc's (Jan 3, 2025)
b244caa  link: how to configure the OpenTelemetry integration (Jan 3, 2025)
b0e510c  link: how to send logs to Grafana Loki using the OpenTelemetry Collector (Jan 3, 2025)
28adca2  link: interface for a Sentry Transaction (Jan 5, 2025)
0c0625f  link: offers proxy support (Jan 5, 2025)
b1979e2  link: proxy support (Jan 3, 2025)
87b9f1d  link: resourcemapping.go (Jan 3, 2025)
630fe22  link: signalfx/sapm-proto (Jan 3, 2025)
893cea6  link: trace_to_envelope.go (Jan 3, 2025)
9a6ca93  link: moving the Sumo Logic exporter into this repository (Jan 9, 2025)
03ae19f  link: config.go (Jan 3, 2025)
f1a5b49  link: testdata/config.yaml (Jan 3, 2025)
@@ -278,7 +278,7 @@ func TestAlertManagerTracesExporterNoErrors(t *testing.T) {

 type (
 	MockServer struct {
-		mockserver            *httptest.Server // this means MockServer aggreagates 'httptest.Server', but can it's more like inheritance in C++
+		mockserver            *httptest.Server // this means MockServer aggregates 'httptest.Server', but can it's more like inheritance in C++
 		fooCalledSuccessfully bool             // this is false by default
 	}
 )
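The comment corrected above gestures at a real Go distinction. A minimal sketch, not part of this PR and with hypothetical type names, of the difference between aggregating `httptest.Server` behind a named field and embedding it for the inheritance-like method promotion the comment alludes to:

```go
package main

import (
	"net/http"
	"net/http/httptest"
)

// Aggregation: the server is a named field and must be reached through it.
type MockServerAggregated struct {
	server *httptest.Server
}

// Embedding: the anonymous field promotes *httptest.Server's methods
// (e.g. Close) onto the outer type, which is the closest Go gets to the
// "inheritance in C++" the comment mentions.
type MockServerEmbedded struct {
	*httptest.Server
}

func main() {
	h := http.NotFoundHandler()

	a := MockServerAggregated{server: httptest.NewServer(h)}
	defer a.server.Close() // must go through the field

	e := MockServerEmbedded{Server: httptest.NewServer(h)}
	defer e.Close() // promoted method, called as if defined on MockServerEmbedded
}
```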
@@ -29,7 +29,7 @@ func createLogData(numberOfLogs int) plog.Logs {
 	logs := plog.NewLogs()
 	logs.ResourceLogs().AppendEmpty() // Add an empty ResourceLogs
 	rl := logs.ResourceLogs().AppendEmpty()
-	rl.Resource().Attributes().PutStr("resouceKey", "resourceValue")
+	rl.Resource().Attributes().PutStr("resourceKey", "resourceValue")
 	rl.Resource().Attributes().PutStr(conventions.AttributeServiceName, "test-log-service-exporter")
 	rl.Resource().Attributes().PutStr(conventions.AttributeHostName, "test-host")
 	sl := rl.ScopeLogs().AppendEmpty()
@@ -13,7 +13,7 @@ import (
 	"go.uber.org/zap"
 )

-// newMetricsExporter return a new LogSerice metrics exporter.
+// newMetricsExporter return a new LogService metrics exporter.
 func newMetricsExporter(set exporter.Settings, cfg component.Config) (exporter.Metrics, error) {
 	l := &logServiceMetricsSender{
 		logger: set.Logger,
@@ -10,7 +10,7 @@
 	},
 	{
 		"Key": "resource",
-		"Value": "{\"resouceKey\":\"resourceValue\"}"
+		"Value": "{\"resourceKey\":\"resourceValue\"}"
 	},
 	{
 		"Key": "otlp.name",

(The identical one-line "resouceKey" fix repeats in this testdata file at lines 64, 118, 172, 226, 280, 334, 388, and 442.)
@@ -13,7 +13,7 @@ import (
 	"go.uber.org/zap"
 )

-// newTracesExporter return a new LogSerice trace exporter.
+// newTracesExporter return a new LogService trace exporter.
 func newTracesExporter(set exporter.Settings, cfg component.Config) (exporter.Traces, error) {
 	l := &logServiceTraceSender{
 		logger: set.Logger,
2 changes: 1 addition & 1 deletion exporter/awscloudwatchlogsexporter/config.go
@@ -37,7 +37,7 @@ type Config struct {
 	// Possible values are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, or 3653
 	LogRetention int64 `mapstructure:"log_retention"`

-	// Tags is the option to set tags for the CloudWatch Log Group. If specified, please add add at least 1 and at most 50 tags. Input is a string to string map like so: { 'key': 'value' }
+	// Tags is the option to set tags for the CloudWatch Log Group. If specified, please add at least 1 and at most 50 tags. Input is a string to string map like so: { 'key': 'value' }
 	// Keys must be between 1-128 characters and follow the regex pattern: ^([\p{L}\p{Z}\p{N}_.:/=+\-@]+)$
 	// Values must be between 1-256 characters and follow the regex pattern: ^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$
 	Tags map[string]*string `mapstructure:"tags"`
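The doc comment above pins down concrete constraints: 1-50 tags, key and value length limits, and two regex patterns. A minimal sketch of enforcing them; `validateTags` and the variable names are hypothetical, while the patterns and limits are copied from the comment:

```go
package main

import (
	"fmt"
	"regexp"
)

// Patterns copied verbatim from the Tags doc comment above.
var (
	tagKeyRE   = regexp.MustCompile(`^([\p{L}\p{Z}\p{N}_.:/=+\-@]+)$`)
	tagValueRE = regexp.MustCompile(`^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$`)
)

// validateTags is a hypothetical helper, not part of the exporter.
func validateTags(tags map[string]string) error {
	if n := len(tags); n < 1 || n > 50 {
		return fmt.Errorf("expected between 1 and 50 tags, got %d", n)
	}
	for k, v := range tags {
		if len(k) < 1 || len(k) > 128 || !tagKeyRE.MatchString(k) {
			return fmt.Errorf("invalid tag key %q", k)
		}
		if len(v) < 1 || len(v) > 256 || !tagValueRE.MatchString(v) {
			return fmt.Errorf("invalid tag value %q", v)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateTags(map[string]string{"team": "observability"})) // <nil>
	fmt.Println(validateTags(map[string]string{"bad key!": "x"}))         // error: '!' is outside the allowed class
}
```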
4 changes: 2 additions & 2 deletions exporter/awsemfexporter/README.md
@@ -41,7 +41,7 @@ The following exporter configuration parameters are supported.
 | `role_arn` | IAM role to upload segments to a different account. | |
 | `max_retries` | Maximum number of retries before abandoning an attempt to post data. | 1 |
 | `dimension_rollup_option` | DimensionRollupOption is the option for metrics dimension rollup. Three options are available: `NoDimensionRollup`, `SingleDimensionRollupOnly` and `ZeroAndSingleDimensionRollup`. The default value is `ZeroAndSingleDimensionRollup`. Enabling feature gate `awsemf.nodimrollupdefault` will set default to `NoDimensionRollup`. |"ZeroAndSingleDimensionRollup" (Enable both zero dimension rollup and single dimension rollup)|
-| `resource_to_telemetry_conversion` | "resource_to_telemetry_conversion" is the option for converting resource attributes to telemetry attributes. It has only one config onption- `enabled`. For metrics, if `enabled=true`, all the resource attributes will be converted to metric labels by default. See `Resource Attributes to Metric Labels` section below for examples. | `enabled=false` |
+| `resource_to_telemetry_conversion` | "resource_to_telemetry_conversion" is the option for converting resource attributes to telemetry attributes. It has only one config option- `enabled`. For metrics, if `enabled=true`, all the resource attributes will be converted to metric labels by default. See `Resource Attributes to Metric Labels` section below for examples. | `enabled=false` |
 | `output_destination` | "output_destination" is an option to specify the EMFExporter output. Currently, two options are available. "cloudwatch" or "stdout" | `cloudwatch` |
 | `detailed_metrics` | Retain detailed datapoint values in exported metrics (e.g instead of exporting a quantile as a statistical value, preserve the quantile's population) | `false` |
 | `parse_json_encoded_attr_values` | List of attribute keys whose corresponding values are JSON-encoded strings and will be converted to JSON structures in emf logs. For example, the attribute string value "{\\"x\\":5,\\"y\\":6}" will be converted to a json object: ```{"x": 5, "y": 6}``` | [ ] |

@@ -73,7 +73,7 @@ A metric descriptor section allows the schema of a metric to be overwritten befo
 | Name | Description | Default |
 | :---------------- | :--------------------------------------------------------------------- | ------- |
 | `metric_name` | The name of the metric to be overwritten. | |
-| `unit` | The overwritten value of unit. The [MetricDatum](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) contains a ful list of supported unit values. | |
+| `unit` | The overwritten value of unit. The [MetricDatum](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) contains a full list of supported unit values. | |
 | `overwrite` | `true` if the schema should be overwritten with the given specification, otherwise it will only be configured if empty. | false |
4 changes: 2 additions & 2 deletions exporter/awsemfexporter/config.go
@@ -73,7 +73,7 @@ type Config struct {
 	// Note that at the moment in order to use this feature the value "kubernetes" must also be added to the ParseJSONEncodedAttributeValues array in order to be used
 	EKSFargateContainerInsightsEnabled bool `mapstructure:"eks_fargate_container_insights_enabled"`

-	// ResourceToTelemetrySettings is an option for converting resource attrihutes to telemetry attributes.
+	// ResourceToTelemetrySettings is an option for converting resource attributes to telemetry attributes.
 	// "Enabled" - A boolean field to enable/disable this option. Default is `false`.
 	// If enabled, all the resource attributes will be converted to metric labels by default.
 	ResourceToTelemetrySettings resourcetotelemetry.Settings `mapstructure:"resource_to_telemetry_conversion"`

@@ -124,7 +124,7 @@ func (config *Config) Validate() error {
 		if _, ok := eMFSupportedUnits[descriptor.Unit]; ok {
 			validDescriptors = append(validDescriptors, descriptor)
 		} else {
-			config.logger.Warn("Dropped unsupported metric desctriptor.", zap.String("unit", descriptor.Unit))
+			config.logger.Warn("Dropped unsupported metric descriptor.", zap.String("unit", descriptor.Unit))
 		}
 	}
 	config.MetricDescriptors = validDescriptors
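For context, the conversion that `ResourceToTelemetrySettings` toggles amounts to copying every resource attribute onto each datapoint as a metric label. A minimal sketch of that idea using the collector's `pdata` API; this is an illustration, not the shared `resourcetotelemetry` implementation, and it only handles gauges:

```go
package main

import (
	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// resourceAttrsToLabels copies every resource attribute onto each
// datapoint's attribute map (hypothetical helper name).
func resourceAttrsToLabels(rm pmetric.ResourceMetrics) {
	resAttrs := rm.Resource().Attributes()
	for i := 0; i < rm.ScopeMetrics().Len(); i++ {
		metrics := rm.ScopeMetrics().At(i).Metrics()
		for j := 0; j < metrics.Len(); j++ {
			m := metrics.At(j)
			// Sketch: only gauges handled; real code covers every metric type.
			if m.Type() != pmetric.MetricTypeGauge {
				continue
			}
			dps := m.Gauge().DataPoints()
			for k := 0; k < dps.Len(); k++ {
				attrs := dps.At(k).Attributes()
				resAttrs.Range(func(key string, v pcommon.Value) bool {
					v.CopyTo(attrs.PutEmpty(key))
					return true
				})
			}
		}
	}
}

func main() {
	md := pmetric.NewMetrics()
	rm := md.ResourceMetrics().AppendEmpty()
	rm.Resource().Attributes().PutStr("service.name", "demo")
	dp := rm.ScopeMetrics().AppendEmpty().Metrics().AppendEmpty().SetEmptyGauge().DataPoints().AppendEmpty()
	dp.SetDoubleValue(1.0)
	resourceAttrsToLabels(rm)
	// dp now carries service.name=demo as a label alongside its own attributes.
}
```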
2 changes: 1 addition & 1 deletion exporter/awsemfexporter/datapoint.go
@@ -567,7 +567,7 @@ func getDataPoints(pmd pmetric.Metric, metadata cWMetricMetadata, logger *zap.Lo
 	// For summaries coming from the prometheus receiver, the sum and count are cumulative, whereas for summaries
 	// coming from other sources, e.g. SDK, the sum and count are delta by being accumulated and reset periodically.
 	// In order to ensure metrics are sent as deltas, we check the receiver attribute (which can be injected by
-	// attribute processor) from resource metrics. If it exists, and equals to prometheus, the sum and count will be
+	// attribute processor) from resource metrics. If it exists, and is equal to prometheus, the sum and count will be
 	// converted.
 	// For more information: https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/prometheusreceiver/DESIGN.md#summary
 	metricMetadata.adjustToDelta = metadata.receiver == prometheusReceiver
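The comment corrected above describes the mechanism only in prose: summaries scraped by the prometheus receiver arrive cumulative, so when the injected receiver attribute equals `prometheus`, the sum and count must be turned into deltas. A minimal sketch of that adjustment with hypothetical names and a simplified per-series cache (the real exporter keys on full series identity and handles resets):

```go
package main

import "fmt"

type summaryState struct {
	sum   float64
	count uint64
}

// previous is a hypothetical per-series cache of the last cumulative values.
var previous = map[string]summaryState{}

// adjustToDelta converts a cumulative sum/count into deltas by subtracting
// the previously observed values; ok is false on the first observation.
func adjustToDelta(seriesKey string, sum float64, count uint64) (dSum float64, dCount uint64, ok bool) {
	prev, seen := previous[seriesKey]
	previous[seriesKey] = summaryState{sum: sum, count: count}
	if !seen {
		return 0, 0, false // first scrape: no delta to report yet
	}
	return sum - prev.sum, count - prev.count, true
}

func main() {
	fmt.Println(adjustToDelta("http_request_duration", 10.0, 4)) // 0 0 false
	fmt.Println(adjustToDelta("http_request_duration", 15.5, 6)) // 5.5 2 true
}
```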
8 changes: 4 additions & 4 deletions exporter/awsemfexporter/datapoint_test.go
@@ -1968,7 +1968,7 @@ func TestCreateLabels(t *testing.T) {
 	labels := createLabels(labelsMap, "")
 	assert.Equal(t, expectedLabels, labels)

-	// With isntrumentation library name
+	// With instrumentation library name
 	labels = createLabels(labelsMap, "cloudwatch-otel")
 	expectedLabels[oTellibDimensionKey] = "cloudwatch-otel"
 	assert.Equal(t, expectedLabels, labels)

@@ -1977,7 +1977,7 @@
 func TestGetDataPoints(t *testing.T) {
 	logger := zap.NewNop()

-	normalDeltraMetricMetadata := generateDeltaMetricMetadata(false, "foo", false)
+	normalDeltaMetricMetadata := generateDeltaMetricMetadata(false, "foo", false)
 	cumulativeDeltaMetricMetadata := generateDeltaMetricMetadata(true, "foo", false)

 	testCases := []struct {

@@ -1991,7 +1991,7 @@
 			name:                   "Int gauge",
 			isPrometheusMetrics:    false,
 			metric:                 generateTestGaugeMetric("foo", intValueType),
-			expectedDatapointSlice: numberDataPointSlice{normalDeltraMetricMetadata, pmetric.NumberDataPointSlice{}},
+			expectedDatapointSlice: numberDataPointSlice{normalDeltaMetricMetadata, pmetric.NumberDataPointSlice{}},
 			expectedAttributes:     map[string]any{"label1": "value1"},
 		},
 		{

@@ -2019,7 +2019,7 @@
 			name:                   "Summary from SDK",
 			isPrometheusMetrics:    false,
 			metric:                 generateTestSummaryMetric("foo"),
-			expectedDatapointSlice: summaryDataPointSlice{normalDeltraMetricMetadata, pmetric.SummaryDataPointSlice{}},
+			expectedDatapointSlice: summaryDataPointSlice{normalDeltaMetricMetadata, pmetric.SummaryDataPointSlice{}},
 			expectedAttributes:     map[string]any{"label1": "value1"},
 		},
 		{
4 changes: 2 additions & 2 deletions exporter/awsemfexporter/emf_exporter_test.go
@@ -71,7 +71,7 @@ func TestConsumeMetricsWithNaNValues(t *testing.T) {
 		generateFunc func(string) pmetric.Metrics
 	}{
 		{
-			"histograme-with-nan",
+			"histogram-with-nan",
 			generateTestHistogramMetricWithNaNs,
 		}, {
 			"gauge-with-nan",

@@ -110,7 +110,7 @@ func TestConsumeMetricsWithInfValues(t *testing.T) {
 		generateFunc func(string) pmetric.Metrics
 	}{
 		{
-			"histograme-with-inf",
+			"histogram-with-inf",
 			generateTestHistogramMetricWithInfs,
 		}, {
 			"gauge-with-inf",
4 changes: 2 additions & 2 deletions exporter/awsemfexporter/grouped_metric_test.go
@@ -454,7 +454,7 @@ func TestAddKubernetesWrapper(t *testing.T) {
 	dockerObj := struct {
 		ContainerID string `json:"container_id"`
 	}{
-		ContainerID: "Container mccontainter the third",
+		ContainerID: "Container mccontainer the third",
 	}
 	expectedCreatedObj := struct {
 		ContainerName string `json:"container_name"`

@@ -469,7 +469,7 @@
 	}

 	inputs := make(map[string]string)
-	inputs["container_id"] = "Container mccontainter the third"
+	inputs["container_id"] = "Container mccontainer the third"
 	inputs["container"] = "container mccontainer"
 	inputs["NodeName"] = "hosty de la host"
 	inputs["PodId"] = "Le id de Pod"
2 changes: 1 addition & 1 deletion exporter/awskinesisexporter/README.md
@@ -13,7 +13,7 @@
 <!-- end autogenerated section -->

 The kinesis exporter currently exports dynamic encodings to the configured kinesis stream.
-The exporter relies heavily on the kinesis.PutRecords api to reduce network I/O and and reduces records into smallest atomic representation
+The exporter relies heavily on the kinesis.PutRecords api to reduce network I/O and reduces records into smallest atomic representation
 to avoid hitting the hard limits placed on Records (No greater than 1Mb).
 This producer will block until the operation is done to allow for retryable and queued data to help during high loads.
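The README paragraph above compresses the batching story into one sentence. A minimal sketch of the chunking it describes, assuming the README's 1 MB per-record hard limit plus the documented 500-records-per-call cap of `kinesis.PutRecords`; the names here are hypothetical, and the real logic lives in `internal/batch`:

```go
package main

import "fmt"

const (
	maxRecordSize     = 1 << 20 // no single record may exceed 1 MB
	maxRecordsPerCall = 500     // PutRecords accepts at most 500 records per call
)

// chunkRecords drops oversized records and splits the rest into
// PutRecords-sized calls (hypothetical helper).
func chunkRecords(records [][]byte) (chunks [][][]byte, dropped int) {
	var current [][]byte
	for _, rec := range records {
		if len(rec) > maxRecordSize {
			dropped++ // oversized records can never be sent; count and skip
			continue
		}
		if len(current) == maxRecordsPerCall {
			chunks = append(chunks, current)
			current = nil
		}
		current = append(current, rec)
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks, dropped
}

func main() {
	recs := make([][]byte, 1200)
	for i := range recs {
		recs[i] = []byte("payload")
	}
	chunks, dropped := chunkRecords(recs)
	fmt.Println(len(chunks), dropped) // 3 0: two full calls of 500 plus one of 200
}
```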
2 changes: 1 addition & 1 deletion exporter/awskinesisexporter/internal/batch/batch.go
@@ -101,7 +101,7 @@ func (b *Batch) AddRecord(raw []byte, key string) error {
 	return nil
 }

-// Chunk breaks up the iternal queue into blocks that can be used
+// Chunk breaks up the internal queue into blocks that can be used
 // to be written to he kinesis.PutRecords endpoint
 func (b *Batch) Chunk() (chunks [][]types.PutRecordsRequestEntry) {
 	// Using local copies to avoid mutating internal data