If you have completed a Splunk Observability workshop using this EC2 instance, please remove the existing Splunk OpenTelemetry Collector installation first:
``` bash
helm delete splunk-otel-collector
```


## 2. The Splunk OpenTelemetry Collector
The Splunk OpenTelemetry Collector is the core component for instrumenting infrastructure and applications, collecting signals such as:
* Host and Application logs

To get Observability signals (**Metrics, Traces** and **Logs**) into the **Splunk Observability Cloud**, we need to add an OpenTelemetry Collector to our Kubernetes cluster.
For this workshop, we will be using the Splunk Kubernetes Helm Chart for the OpenTelemetry Collector and installing the Collector in `Operator` mode, as this is required for Zero-config auto instrumentation.

## 3. Install the OpenTelemetry Collector using Helm

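Before installing the Collector, the Helm chart repository needs to be added and refreshed. The exact commands were elided in this capture; a minimal sketch, assuming the standard Splunk chart repository name and URL:

``` bash
# add the Splunk OpenTelemetry Collector chart repository and refresh it
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
```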

After the update completes, you should see output ending with:

```text
Update Complete. ⎈Happy Helming!⎈
```

The Splunk Observability Cloud offers wizards in the **Splunk Observability Suite** UI to walk you through the setup of the Collector on Kubernetes, but in the interest of time, we will use a setup created earlier. As we want auto instrumentation to be available, we will install the OpenTelemetry Collector using its Helm chart with some additional options:

* --set="operator.enabled=true" - this will install the Opentelemetry operator, that will be used to handle auto instrumentation
* --set="certmanager.enabled=true" - This will install the required certificate manager for the operator.
* --set="splunkObservability.profilingEnabled=true" - This enabled Code profiling via the operator
* `--set="operator.enabled=true"` - this will install the Opentelemetry operator, that will be used to handle auto instrumentation
* `--set="certmanager.enabled=true"` - This will install the required certificate manager for the operator.
* `--set="splunkObservability.profilingEnabled=true"` - This enabled Code profiling via the operator

To install the Collector, run the following commands; do **NOT** edit them:

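The full install command was elided in this capture, so the sketch below is representative only: it combines the options above with standard chart values, and assumes `$REALM`, `$ACCESS_TOKEN` and `$INSTANCE` environment variables were set by the shell script you ran earlier, plus illustrative cluster and environment names:

``` bash
# representative sketch — the workshop supplies the exact command
helm install splunk-otel-collector \
  --set="splunkObservability.realm=$REALM" \
  --set="splunkObservability.accessToken=$ACCESS_TOKEN" \
  --set="clusterName=$INSTANCE-cluster" \
  --set="environment=$INSTANCE-workshop" \
  --set="operator.enabled=true" \
  --set="certmanager.enabled=true" \
  --set="splunkObservability.profilingEnabled=true" \
  splunk-otel-collector-chart/splunk-otel-collector
```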

The tail of the setup output should look like this:

```text
configmap/scriptfile created
```
<!-- {{% notice %}}
On rare occasions, you may encounter an error at this point. Please log out and back in, and verify that the environment variables above are all set correctly. If they are not, please contact your instructor.
{{% /notice %}} -->
At this point, we can verify the deployment by checking if the Pods are running. Note that the containers need to be downloaded and started, so this may take a minute or so.
{{< tabs >}}
{{% tab title="kubectl get pods" %}}

Once they are running, the application will take a few minutes to fully start up.

## 5. Verify the local Docker Repository

Once we have tested the Zero-config auto instrumentation on the existing containers, we are going to build our own containers to show some of the additional instrumentation features of OpenTelemetry Java. Only then will we touch the config files or the source code. Once we build these containers, Kubernetes will need to pull the new images from somewhere. To enable this, we have created a local repository to store them, so Kubernetes can pull the images locally.

We can check if the repository is up and running by querying its inventory with the command below:

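The exact command was elided in this capture; a minimal sketch, assuming the local registry listens on `localhost:5000` (the Docker Registry HTTP API exposes the repository inventory at `/v2/_catalog`):

``` bash
# the host and port are assumptions — use your workshop's registry address
curl -s http://localhost:5000/v2/_catalog
```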

## 1. Verify the installation by checking Metrics and Logs

Once the installation is completed, you can log in to the **Splunk Observability Cloud** with the URL provided by the Instructor.

First, navigate to the **Kubernetes Navigator** view in the **Infrastructure** ![infra](../images/infra-icon.png?classes=inline&height=25px) section to see the metrics from your cluster in the **K8s nodes** pane. Once you are in the Kubernetes Navigator view, change the *Time* filter to the last 15 minutes (-15m) to focus on the latest data.

Select your cluster with the regular filter option at the top of the Navigator, using the filter `k8s.cluster.name` **(1)**. Type or select the cluster name of your workshop instance (you can get the unique part of your cluster name from the `INSTANCE` value in the output of the shell script you ran earlier). You can also select your cluster by clicking on its image in the cluster pane.
You should now only have your cluster visible **(2)**.

![Navigator](../images/navigator.png)

You should see metrics **(3)** from your cluster, and the log events **(4)** chart should start to be populated with log events coming from your cluster. Click on one of the bars to peek at the log lines coming in from your cluster.

![logs](../images/k8s-peek-at-logs.png)

Also, a `Mysql` pane **(5)** should appear. When you click on that pane, you can see the MySQL-related metrics from your database.

![MySQL metrics](../images/mysql-metrics.png)

Once you see data (`metrics and logs`) flowing in from your host, and MySQL is reporting `metrics` as well, we can move on to the actual PetClinic application.


## 2. Setting up Java auto instrumentation on the api-gateway pod

Let's look at how Zero-config works with a single pod, the `api-gateway`. If you enable Zero configuration for a pod, the Collector will attach an init container to your existing pod and restart the pod to activate it.

To show what happens when you enable auto instrumentation, let's do a *Before & After* of the content of a pod, the `api-gateway` in this case:

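The patch command itself was elided in this capture; a representative sketch, assuming the operator's `Instrumentation` resource is named `splunk-otel-collector` in the `default` namespace. The `instrumentation.opentelemetry.io/inject-java` annotation is what tells the OpenTelemetry Operator to inject the Java agent:

``` bash
# "default/splunk-otel-collector" is an assumed Instrumentation resource name
kubectl patch deployment api-gateway -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}'
```

Once the pod has restarted, check the images in the pod again: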

``` bash
kubectl describe pods api-gateway | grep Image:
```

Next to the original pod from before, you should see an initContainer named **opentelemetry-auto-instrumentation**. (If you get two api-gateway containers, the original one is still terminating, so give it a few seconds):

{{< tabs >}}
{{% tab title="Example output" %}}

```text
Image: ghcr.io/signalfx/splunk-otel-java/splunk-otel-java:v1.30.0
Image: quay.io/phagen/spring-petclinic-api-gateway:0.0.2
```

{{% /tab %}}
{{< /tabs >}}

## 3. Enable Java auto instrumentation on all pods

Now let's patch all the other services, so we can see the full interaction between all services, using `app.kubernetes.io/part-of=spring-petclinic` as the inject annotation.
Remember: **This automatically causes pods to restart.**

Note that there will be no change for the *config-server, discovery-server, admin-server & api-gateway*, as we patched these earlier.
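
Again, the exact command was elided in this capture; a minimal sketch, using the label above to select every PetClinic deployment and assuming the same hypothetical `default/splunk-otel-collector` `Instrumentation` resource:

``` bash
# patch every deployment carrying the part-of label in one pass
kubectl get deployments -l app.kubernetes.io/part-of=spring-petclinic -o name \
  | xargs -I % kubectl patch % -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}'
```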

The tail of the output will look like this:

```text
deployment.apps/api-gateway patched (no change)
```

## 4. Check the result in Splunk APM

Once the containers are patched, they will be restarted. Let's go back to the **Splunk Observability Cloud** with the URL provided by the Instructor to check our cluster in the Kubernetes Navigator.

After a couple of minutes or so, you should see that the Pods are being restarted by the operator and the Zero-config container is added. This will look similar to the screenshot below:

![restart](../images/k8s-navigator-restarted-pods.png)

Wait for the pods to turn green again (you may want to refresh the screen), then navigate to the **APM** ![APM](../images/apm-icon.png?classes=inline&height=25px) section to look at the information provided by the traces generated from your service in the **Explore** pane. Use the filter option to change the *environment* filter **(1)** and search for the name of your workshop instance in the dropdown box; it should be [INSTANCE]-workshop (where `INSTANCE` is the value from the shell script you ran earlier). Make sure it is the only one selected.

![apm](../images/zero-config-first-services-overview.png)

You should see the name **(2)** of the api-gateway service and its metrics in the Latency chart.
Next, click on **Explore** **(3)** to see the services in the automatically generated dependency map and select the api-gateway service.
![apm map](../images/zero-config-first-services-map.png)

The example above shows all the interactions between all our services. Your map may still be in an interim state, as it will take the PetClinic microservices application a few minutes to start up and fully synchronize before your map looks like the one above. Reducing the time window will help: if you pick a custom time of 2 minutes, the initial startup-related errors (red dots) will disappear from the view.

In the meantime, let's examine the metrics that are available for each instrumented service and visit the request, error, and duration (RED) metrics dashboard.

## 5. Examine default R.E.D. Metrics

Splunk APM provides a set of built-in dashboards that present charts and visualized metrics to help you see problems occurring in real time and quickly determine whether the problem is associated with a service, a specific endpoint, or the underlying infrastructure. To look at this dashboard for the selected `api-gateway`, make sure you have the `api-gateway` service selected in the dependency map as shown above, then click on the **View Dashboard** link **(1)** at the top of the right-hand pane.

This will bring you to the services dashboard:

![metrics dashboard](../images/zero-config-first-services-metrics.png)

This dashboard, which is available for each of your instrumented services, offers an overview of the key `request, error, and duration (RED)` metrics based on Monitoring MetricSets created from endpoint spans for your services, endpoints, and Business Workflows. It also presents related host and Kubernetes metrics to help you determine whether problems are related to the underlying infrastructure, as in the above image.
As the dashboards allow you to go back in time with the *Time picker* window **(1)**, it's the perfect spot to identify the behavior you wish to be alerted on, and with a click on one of the bell icons **(2)** available in each chart, you can set up an alert to do just that.

If you scroll down the page, you get host and Kubernetes metrics related to your service as well.
Let's move on to look at some of the traces generated by the Zero-config auto instrumentation.