
Merge pull request #1558 from splunk/repo-sync
Pulling refs/heads/main into main
aurbiztondo-splunk authored Oct 4, 2024
2 parents 2957360 + fdd4a6a commit d14bf5d
Showing 87 changed files with 322 additions and 1,654 deletions.
Binary file modified _images/images-slo/custom-metric-slo-scenario.png
Binary file added _images/logs/LogObserverEnhancementsUI.png
Binary file modified _images/logs/lo-openinsplunk.png
31 changes: 23 additions & 8 deletions _includes/logs/query-logs.rst
@@ -1,12 +1,27 @@
#. Navigate to :guilabel:`Log Observer`. In the content control bar, enter a time range in the time picker if you know it.
#. Select :guilabel:`Index` next to :guilabel:`Saved Queries`, then select the indexes you want to query. If you want to search your Splunk platform (Splunk Cloud Platform or Splunk Enterprise) data, select the integration for the appropriate Splunk platform instance first, then select which index you want to query in Log Observer. You can only query indexes from one Splunk platform instance or Splunk Observability Cloud instance at a time. You can only query Splunk platform indexes if you have the appropriate role and permissions in the Splunk platform instance. Select :guilabel:`Apply`.
#. In the content control bar next to the index picker, select :guilabel:`Add Filter`.
#. To search on a keyword, select the :guilabel:`Keyword` tab, type the keyword or phrase you want to search on, then press Enter. If you want to search on a field, select the :guilabel:`Fields` tab, enter the field name, then press Enter.
#. To continue adding keywords or fields to the search, select :guilabel:`Add Filter`.
#. Review the top values for your query on the :guilabel:`Fields` panel on the right. This list includes the count of each value in the log records. To include log records with a particular value, select the field name, then select ``=``. To exclude log records with a particular value from your results, select the field name, then select ``!=``. To see the full list of values and distribution for this field, select :guilabel:`Explore all values`.
#. Optionally, if you are viewing Splunk platform (Splunk Cloud Platform or Splunk Enterprise) data, you can open your query results in the Splunk platform to use SPL to further filter or work with the query results. You must have an account in Splunk platform. To open the log results in the Splunk platform, select the :guilabel:`Open in Splunk platform` icon at the top of the Logs table.
1. Navigate to :guilabel:`Log Observer`. Upon opening, Log Observer runs an initial search of all indexes you have access to and returns the most recent 150,000 logs. The search then defaults to Pause in order to save Splunk Virtual Compute (SVC) resources. Control your SVC resources, which impact performance and cost, by leaving your search on Pause when you are not monitoring incoming logs, and selecting Play when you want to see more incoming logs.

.. image:: /_images/logs/LogObserverEnhancementsUI.png
   :width: 90%
   :alt: The Log Observer UI is displayed.


2. In the content control bar, enter a time range in the time picker if you want to see logs from a specific historical period. To select a time range, you must select :guilabel:`Unlimited` from the :guilabel:`Search Records` field in step 5 below. When you select :guilabel:`150,000`, Log Observer returns only the most recent 150,000 logs regardless of the time range you select.

3. Select :guilabel:`Index` next to :guilabel:`Saved Queries`, then select the indexes you want to query. When you do not select an index, Log Observer runs your query on all indexes to which you have access. If you want to search your Splunk platform (Splunk Cloud Platform or Splunk Enterprise) data, select the integration for the appropriate Splunk platform instance first, then select which index you want to query in Log Observer. You can query indexes from only one Splunk platform instance or Splunk Observability Cloud instance at a time. You can query Splunk platform indexes only if you have the appropriate role and permissions.

4. In the content control bar next to the index picker, select :guilabel:`Add Filter`. Select the :guilabel:`Keyword` tab to search on a keyword or phrase. Select the :guilabel:`Fields` tab to search on a field. Then press Enter. To continue adding keywords or fields to the search, select :guilabel:`Add Filter` again.

5. Next, select :guilabel:`Unlimited` or :guilabel:`150,000` from the :guilabel:`Search Records` field to determine the number of logs you want to return on a single search. Select :guilabel:`150,000` to optimize your Splunk Virtual Compute (SVC) resources and control performance and cost. However, only the most recent 150,000 logs display. To see a specific time range, you must select :guilabel:`Unlimited`.

6. To narrow your search, use the :guilabel:`Group by` drop-down list to select the field or fields by which you want to group your results, then select :guilabel:`Apply`. To learn more about aggregations, see :ref:`logs-aggregations`.

7. Select :guilabel:`Run search`.

8. Review the top values for your query on the :guilabel:`Fields` panel on the right. This list includes the count of each value in the log records. To include log records with a particular value, select the field name, then select ``=``. To exclude log records with a particular value from your results, select the field name, then select ``!=``. To see the full list of values and distribution for this field, select :guilabel:`Explore all values`.

9. Optionally, if you are viewing Splunk platform data, you can open your query results in the Splunk platform and use SPL to further query the resulting logs. You must have an account in Splunk platform. To open the log results in the Splunk platform, select the :guilabel:`Open in Splunk platform` icon at the top of the Logs table.

.. image:: /_images/logs/lo-openinsplunk.png
   :width: 100%
   :width: 90%
   :alt: The Open in Splunk platform icon is at the top, right-hand side of the Logs table.
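The ``=`` and ``!=`` selections in the :guilabel:`Fields` panel behave like include and exclude filters on a field value. The following is a minimal sketch of that behavior in plain Python, applied to in-memory records; the field names and values are invented for illustration:

```python
# Sketch of the = (include) and != (exclude) field filters.
# The sample log records below are invented for this example.
logs = [
    {"severity": "error", "service": "checkout"},
    {"severity": "info", "service": "checkout"},
    {"severity": "error", "service": "payment"},
]

def include(records, field, value):
    """Keep records where field = value."""
    return [r for r in records if r.get(field) == value]

def exclude(records, field, value):
    """Keep records where field != value."""
    return [r for r in records if r.get(field) != value]

errors = include(logs, "severity", "error")                 # 2 records
non_checkout_errors = exclude(errors, "service", "checkout")
print(len(errors), len(non_checkout_errors))                # 2 1
```

Chaining the two calls mirrors adding successive filters in the content control bar: each filter further narrows the result set.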

3 changes: 2 additions & 1 deletion admin/references/data-retention.rst
@@ -87,7 +87,8 @@ The following table shows the retention time period for each data type in APM.
Data retention in Log Observer
============================================

The retention period for indexed logs in Splunk Log Observer is 30 days. If you send logs to S3 through the Infinite Logging feature, then the data retention period depends on the policy you purchased for your Amazon S3 bucket. To learn how to set up Infinite Logging rules, see :ref:`logs-infinite`.
The retention period for indexed logs in Splunk Log Observer is 30 days.


.. _oncall-data-retention:

2 changes: 1 addition & 1 deletion admin/subscription-usage/subscription-usage-overview.rst
@@ -64,6 +64,6 @@ Learn more at :ref:`per-product-limits` and the following docs:

* Data ingest can be limited at the source by Cloud providers. You can track this with the metric ``sf.org.num.<cloudprovidername>ServiceClientCallCountThrottles``.

* :ref:`Log Observer Connect limits <lo-connect-limits>` and :ref:`Log Observer limits <logs-limits>`
* :ref:`Log Observer Connect limits <lo-connect-limits>`

* :ref:`System limits for Splunk RUM <rum-limits>`
3 changes: 2 additions & 1 deletion admin/subscription-usage/synthetics-usage.rst
@@ -25,7 +25,8 @@ Splunk Synthetic Monitoring offers metrics you can use to track your subscriptio
- Total number of synthetic runs by organization. To filter by test type:
- ``test_type=browser``
- ``test_type=API``
- ``test_type=uptime``
- ``test_type=http``
- ``test_type=port``


See also
29 changes: 15 additions & 14 deletions alerts-detectors-notifications/slo/create-slo.rst
@@ -19,7 +19,7 @@ Follow these steps to create an SLO.
#. From the landing page of Splunk Observability Cloud, go to :strong:`Detectors & SLOs`.
#. Select the :strong:`SLOs` tab.
#. Select :guilabel:`Create SLO`.
#. Configure the service level indicator (SLI) for your SLO.
#. Configure the service level indicator (SLI) for your SLO. You can use a service or any metric of your choice as the system health indicator.

To use a service as the system health indicator for your SLI configuration, follow these steps:

@@ -46,21 +46,22 @@ Follow these steps to create an SLO.
* - :guilabel:`Filters`
- Enter any additional dimension names and values you want to apply this SLO to. Alternatively, use the ``NOT`` filter, represented by an exclamation point ( ! ), to exclude any dimension values from this SLO configuration.

To use a custom metric as the system health indicator for your SLI configuration, follow these steps:
To use a metric of your choice as the system health indicator for your SLI configuration, follow these steps:

.. list-table::
   :header-rows: 1
   :widths: 40 60
   :width: 100%
#. For the :guilabel:`Metric type` field, select :guilabel:`Custom metric` from the dropdown menu. The SignalFlow editor appears.
#. In the SignalFlow editor, you can see the following code sample:

   * - :strong:`Field name`
     - :strong:`Actions`
   * - :guilabel:`Metric type`
     - Select :guilabel:`Custom metric` from the dropdown menu
   * - :guilabel:`Good events (numerator)`
     - Search for the metric you want to use for the success request count
   * - :guilabel:`Total events (denominator)`
     - Search for the metric you want to use for the total request count
.. code-block:: python

   G = data('good.metric', filter=filter('sf_error', 'false'))
   T = data('total.metric')
* Line 1 defines ``G`` as a data stream of ``good.metric`` metric time series (MTS). The SignalFlow ``filter()`` function queries for a collection of MTS with value ``false`` for the ``sf_error`` dimension. The filter distinguishes successful requests from total requests, making ``G`` the good events variable.
* Line 2 defines ``T`` as a data stream of ``total.metric`` MTS. ``T`` is the total events variable.

Replace the code sample with your own SignalFlow program. You can define good events and total events variables using any metric and supported SignalFlow function. For more information, see :new-page:`Analyze data using SignalFlow <https://dev.splunk.com/observability/docs/signalflow>` in the Splunk Observability Cloud Developer Guide.

#. Select appropriate variable names for the :guilabel:`Good events (numerator)` and :guilabel:`Total events (denominator)` dropdown menus.

.. note:: Custom metric SLO works by calculating the percentage of successful requests over a given compliance period. This calculation works better for counter and histogram metrics than for gauge metrics. Gauge metrics are not suitable for custom metric SLO, so you might get confusing data when selecting gauge metrics in your configuration.
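The percentage calculation described in the note can be sketched in a few lines. This is an illustration only, not Splunk's implementation; the per-interval event counts and the target value are invented:

```python
# Illustration: a custom metric SLO computes the percentage of good
# events over total events for the compliance period. All numbers
# below are invented for the example.
good_events = [995, 1000, 998]     # per-interval counts from the G stream
total_events = [1000, 1000, 1000]  # per-interval counts from the T stream

attainment = 100 * sum(good_events) / sum(total_events)
target = 99.0  # hypothetical SLO target percentage

print(f"attainment = {attainment:.2f}%")  # attainment = 99.77%
print("target met" if attainment >= target else "breach")
```

Because the ratio is a running sum of counts, counter and histogram metrics aggregate cleanly across the compliance period, which is why gauge metrics are a poor fit.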

39 changes: 13 additions & 26 deletions alerts-detectors-notifications/slo/custom-metric-scenario.rst
@@ -17,32 +17,22 @@ Use custom metric as service level indicator (SLI)

From the :guilabel:`Detectors & SLOs` page, Kai configures the SLI and sets up a target for their SLO. Kai follows these steps:

#. Kai wants to use custom metrics as the system health indicators, so they select the :guilabel:`Custom metric` from the :guilabel:`Metric type` menu.
#. Kai enters the custom metrics they want to measure in the following fields:
#. Kai wants to use a Synthetics metric as the system health indicator, so they select :guilabel:`Custom metric` from the :guilabel:`Metric type` menu.
#. Kai enters the following program into the SignalFlow editor:

.. list-table::
   :header-rows: 1
   :widths: 10 20 30 40
.. code-block:: python
   * - Field
     - Metric name
     - Filters
     - Description
   G = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check') and filter('success', 'true'))
   T = data('synthetics.run.count', filter=filter('test', 'Monitoring Services - Emby check'))
   * - :guilabel:`Good events (numerator)`
     - :strong:`synthetics.run.count`
     - Kai adds the following filters for this metric:

       * :strong:`test = Emby check`
       * :strong:`success = true`
     - Kai uses the :strong:`success = true` filter to count the number of successful requests for the Emby service on the Buttercup Games website.
Kai defines variables ``G`` and ``T`` as two streams of ``synthetics.run.count`` metric time series (MTS) measuring the health of requests sent to the Emby service. To distinguish between the two data streams, Kai applies an additional filter on the ``success`` dimension in the definition for ``G``. This filter queries for a specific collection of MTS that track successful requests for the Emby service. In Kai's SignalFlow program, ``G`` is a data stream of good events and ``T`` is a data stream of total events.

   * - :guilabel:`Total events (denominator)`
     - :strong:`synthetics.run.count`
     - Kai adds the following filter for this metric:
.. image:: /_images/images-slo/custom-metric-slo-scenario.png
   :width: 100%
   :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters.

       * :strong:`test = Emby check`
     - Kai uses the same metric name and the :strong:`test = Emby check` filter to track the same Synthetics Browser test. However, Kai doesn't include the :strong:`success = true` dimension filter in order to count the number of total requests for the Emby service on the Buttercup Games website.

#. Kai assigns ``G`` to the :guilabel:`Good events (numerator)` dropdown menu and ``T`` to the :guilabel:`Total events (denominator)` dropdown menu.

#. Kai enters the following fields to define a target for their SLO:

@@ -64,11 +54,6 @@

#. Kai subscribes to receive an alert whenever there is a breach event for the SLO target.

.. image:: /_images/images-slo/custom-metric-slo-scenario.png
   :width: 100%
   :alt: This image shows Kai's SLO configuration using the ``synthetics.run.count`` metric and appropriate filters.
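Kai's two SignalFlow queries can be approximated in plain Python to show how the extra ``success`` filter partitions the same metric into good and total events. The MTS dimension values and counts below are invented for illustration:

```python
# Plain-Python approximation of Kai's SignalFlow filters. The MTS
# dimension values and counts are invented for this example.
mts = [
    {"test": "Monitoring Services - Emby check", "success": "true",  "count": 48},
    {"test": "Monitoring Services - Emby check", "success": "false", "count": 2},
    {"test": "Some other test",                  "success": "true",  "count": 10},
]

def filter_mts(series, **dims):
    """Keep only the MTS whose dimensions match every key=value pair."""
    return [m for m in series if all(m.get(k) == v for k, v in dims.items())]

G = filter_mts(mts, test="Monitoring Services - Emby check", success="true")
T = filter_mts(mts, test="Monitoring Services - Emby check")

good = sum(m["count"] for m in G)   # 48
total = sum(m["count"] for m in T)  # 50
print(f"SLI = {100 * good / total:.1f}%")  # SLI = 96.0%
```

As in the real configuration, ``T`` deliberately omits the ``success`` filter so that it counts every run of the same test, successful or not.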


Summary
=======================

@@ -80,3 +65,5 @@ Learn more
For more information about creating an SLO, see :ref:`create-slo`.

For more information about the Synthetics Browser test, see :ref:`browser-test`.

For more information on SignalFlow, see :new-page:`Analyze data using SignalFlow <https://dev.splunk.com/observability/docs/signalflow>` in the Splunk Observability Cloud Developer Guide.
2 changes: 1 addition & 1 deletion apm/apm-scenarios/troubleshoot-business-workflows.rst
@@ -81,4 +81,4 @@ Learn more

* For details about business workflows, see :ref:`apm-workflows`.
* For details about using Related Content, see :ref:`get-started-relatedcontent`.
* For more information about using Splunk Log Observer to detect the source of problems, see :ref:`get-started-logs`.
* For more information about using Splunk Log Observer Connect to detect the source of problems, see :ref:`logs-intro-logconnect`.
2 changes: 1 addition & 1 deletion apm/apm-scenarios/troubleshoot-tag-spotlight.rst
@@ -81,4 +81,4 @@ Learn more

* For details about Tag Spotlight, see :ref:`apm-tag-spotlight`.
* For details about using Related Content, see :ref:`get-started-relatedcontent`.
* For more information about using Splunk Log Observer to detect the source of problems, see :ref:`get-started-logs`.
* For more information about using Splunk Log Observer Connect to detect the source of problems, see :ref:`logs-intro-logconnect`.
2 changes: 2 additions & 0 deletions apm/intro-to-apm.rst
@@ -8,6 +8,8 @@ Introduction to Splunk APM

Collect :ref:`traces and spans<apm-traces-spans>` to monitor your distributed applications with Splunk Application Performance Monitoring (APM). A trace is a collection of actions, or spans, that occur to complete a transaction. Splunk APM collects and analyzes every span and trace from each of the services that you have connected to Splunk Observability Cloud to give you full-fidelity access to all of your application data.

To keep up to date with changes in APM, see the Splunk Observability Cloud :ref:`release notes <release-notes-overview>`.

For scenarios using Splunk APM, see :ref:`apm-scenarios-intro`.

.. raw:: html
@@ -159,7 +159,6 @@ The instrumentation uses the underscore character as separator for field names (
- ``service_version`` to ``service.version``
- ``deployment_environment`` to ``deployment.environment``

See :ref:`logs-processors` for more information on how to define log transformation rules.

ILogger
-------------------------
2 changes: 1 addition & 1 deletion gdi/get-data-in/gdi-guide/additional-resources.rst
@@ -36,5 +36,5 @@ See the following resources for more information about each component in Splunk

- :ref:`get-started-apm`
- :ref:`get-started-infrastructure`
- :ref:`get-started-logs`
- :ref:`logs-intro-logconnect`
- :ref:`get-started-rum`
2 changes: 1 addition & 1 deletion gdi/get-data-in/get-data-in.rst
@@ -21,7 +21,7 @@ Use Splunk Observability Cloud to achieve full-stack observability of all your d
- :ref:`Splunk Infrastructure Monitoring <infrastructure-infrastructure>`
- :ref:`Splunk Application Performance Monitoring (APM) <get-started-apm>`
- :ref:`Splunk Real User Monitoring (RUM) <rum-gdi>`
- :ref:`Splunk Log Observer <get-started-logs>` and :ref:`Log Observer Connect <logs-intro-logconnect>`
- :ref:`Splunk Log Observer Connect <logs-intro-logconnect>`

This guide provides four chapters that guide you through the process of setting up each component of Splunk Observability Cloud.

5 changes: 1 addition & 4 deletions gdi/monitors-cache/opcache.rst
@@ -6,10 +6,7 @@ OPcache
.. meta::
:description: Use this Splunk Observability Cloud integration for the Collectd OPcache monitor. See benefits, install, configuration, and metrics

The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the
``collectd/opcache`` monitor type to retrieve metrics from OPcache using
the ``opcache_get_status()`` function, which improves PHP performance by
storing precompiled script bytecode in shared memory.
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``collectd/opcache`` monitor type to call the ``opcache_get_status()`` function and retrieve metrics from OPcache, which improves PHP performance by storing precompiled script bytecode in shared memory.

This integration is available on Kubernetes and Linux.

6 changes: 1 addition & 5 deletions gdi/monitors-databases/apache-spark.rst
@@ -17,11 +17,7 @@ endpoints:
- Mesos
- Hadoop YARN

This collectd plugin is not compatible with Kubernetes cluster mode. You need
to select distinct monitor configurations and discovery rules
for primary and worker processes. For the primary configuration, set
``isMaster`` to ``true``. When you run Apache Spark on Hadoop YARN, this
integration can only report application metrics from the primary node.
This collectd plugin is not compatible with Kubernetes cluster mode. You need to select distinct monitor configurations and discovery rules for primary and worker processes. For the primary configuration, set ``isMaster`` to ``true``. When you run Apache Spark on Hadoop YARN, this integration can only report application metrics from the primary node.

This integration is only available on Linux.

6 changes: 1 addition & 5 deletions gdi/monitors-databases/etcd.rst
@@ -6,11 +6,7 @@ etcd server
.. meta::
:description: Use this Splunk Observability Cloud integration for the etcd monitor. See benefits, install, configuration, and metrics

The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the
etcd monitor type to report etcd server metrics under the ``/metrics``
path on its client port. Optionally, you can edit the location using
``--listen-metrics-urls``. This integration only collects metrics from
the Prometheus endpoint.
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the etcd monitor type to report etcd server metrics under the ``/metrics`` path on its client port. Optionally, you can edit the location using ``--listen-metrics-urls``. This integration only collects metrics from the Prometheus endpoint.
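Because the integration reads the Prometheus endpoint, the data it collects is plain exposition-format text. The following is a minimal sketch of parsing a few lines of that format; the sample payload is invented for illustration (though ``etcd_server_has_leader`` is a real etcd metric name):

```python
# Sketch of parsing Prometheus exposition-format text, such as an etcd
# server serves under /metrics. The sample payload is invented.
sample = """\
# HELP etcd_server_has_leader Whether or not a leader exists.
# TYPE etcd_server_has_leader gauge
etcd_server_has_leader 1
etcd_network_peer_sent_bytes_total{To="abc123"} 4096
"""

def parse_prometheus(text):
    """Return a dict mapping series name (with labels) to float value."""
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE metadata
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

print(parse_prometheus(sample)["etcd_server_has_leader"])  # 1.0
```

A real scrape would fetch the text from the metrics URL over HTTP; the monitor handles that automatically, so this sketch is only to show what the endpoint serves.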

Benefits
--------