diff --git a/_includes/gdi/application-receiver-table-deprecated.rst b/_includes/gdi/application-receiver-table-deprecated.rst new file mode 100644 index 000000000..2de64e9af --- /dev/null +++ b/_includes/gdi/application-receiver-table-deprecated.rst @@ -0,0 +1,25 @@ +* :ref:`asp-dot-net` +* :ref:`amazon-ecs-metadata` +* :ref:`chrony` +* :ref:`collectd-df` +* :ref:`consul` +* :ref:`disk` +* :ref:`exec-input` +* :ref:`load` +* :ref:`interface` +* :ref:`java-monitor` +* :ref:`kong` +* :ref:`kubernetes-cluster` +* :ref:`kube-controller-manager` +* :ref:`kubelet-stats` +* :ref:`microsoft-dotnet` +* :ref:`mongodb` +* :ref:`mongodb-atlas` +* :ref:`mysql` +* :ref:`nagios` +* :ref:`postgresql` +* :ref:`redis` +* :ref:`signalfx-forwarder` +* :ref:`statsd` +* :ref:`telegraf-win-perf-counters` + diff --git a/_includes/gdi/application-receiver-table.rst b/_includes/gdi/application-receiver-table.rst index 549214a06..d19862c9d 100644 --- a/_includes/gdi/application-receiver-table.rst +++ b/_includes/gdi/application-receiver-table.rst @@ -8,10 +8,6 @@ - :strong:`Provides metrics` - :strong:`Provides traces` - * - :ref:`amazon-ecs-metadata` - - :strong:`X` - - - * - :ref:`activemq` - :strong:`X` - @@ -40,10 +36,6 @@ - :strong:`X` - - * - :ref:`asp-dot-net` - - :strong:`X` - - - * - :ref:`appmesh` - :strong:`X` - @@ -60,10 +52,6 @@ - :strong:`X` - - * - :ref:`chrony` - - :strong:`X` - - - * - :ref:`cloudfoundry-firehose-nozzle` - :strong:`X` - @@ -72,18 +60,10 @@ - :strong:`X` - - * - :ref:`collectd-df` - - :strong:`X` - - - * - :ref:`collectd-uptime` - :strong:`X` - - * - :ref:`consul` - - :strong:`X` - - - * - :ref:`conviva` - :strong:`X` - @@ -104,10 +84,6 @@ - :strong:`X` - - * - :ref:`disk` - - :strong:`X` - - - * - :ref:`dns` - :strong:`X` - @@ -124,14 +100,6 @@ - :strong:`X` - - * - :ref:`etcd` - - :strong:`X` - - - - * - :ref:`exec-input` - - :strong:`X` - - - * - :ref:`expvar` - :strong:`X` - @@ -160,10 +128,6 @@ - :strong:`X` - - * - :ref:`health-checker` - - :strong:`X` - - - * - :ref:`heroku` - :strong:`X` - @@ -180,18 +144,10 @@ - - - * - :ref:`load` - - :strong:`X` - - - * - :ref:`http` - :strong:`X` - - * - :ref:`interface` - - :strong:`X` - - :strong:`X` - * - :ref:`get-started-istio` - :strong:`X` - :strong:`X` @@ -200,10 +156,6 @@ - :strong:`X` - - * - :ref:`java-monitor` - - :strong:`X` - - - * - :ref:`jenkins` - :strong:`X` - @@ -220,30 +172,14 @@ - :strong:`X` - - * - :ref:`kong` - - :strong:`X` - - - * - :ref:`kubernetes-apiserver` - :strong:`X` - - * - :ref:`kubernetes-cluster` - - :strong:`X` - - - - * - :ref:`kube-controller-manager` - - :strong:`X` - - - * - :ref:`kubernetes-events` - :strong:`X` - - * - :ref:`kubelet-stats` - - :strong:`X` - - - * - :ref:`kubernetes-proxy` - :strong:`X` - @@ -272,10 +208,6 @@ - :strong:`X` - - * - :ref:`microsoft-dotnet` - - :strong:`X` - - - * - :ref:`get-started-dotnet-otel` - :strong:`X` - @@ -288,22 +220,6 @@ - :strong:`X` - - * - :ref:`mongodb` - - :strong:`X` - - - - * - :ref:`mongodb-atlas` - - :strong:`X` - - - - * - :ref:`mysql` - - :strong:`X` - - - - * - :ref:`nagios` - - :strong:`X` - - - * - :ref:`net-io` - :strong:`X` - @@ -340,10 +256,6 @@ - :strong:`X` - - * - :ref:`postgresql` - - :strong:`X` - - - * - :ref:`procstat` - :strong:`X` - @@ -380,18 +292,10 @@ - :strong:`X` - - * - :ref:`redis` - - :strong:`X` - - - * - :ref:`hana` - :strong:`X` - - * - :ref:`signalfx-forwarder` - - :strong:`X` - - :strong:`X` - * - :ref:`snmp` - :strong:`X` - @@ -404,10 +308,6 @@ - :strong:`X` - - * - :ref:`statsd` - - :strong:`X` - - - * - 
:ref:`supervisor` - :strong:`X` - @@ -428,10 +328,8 @@ - :strong:`X` - - * - :ref:`telegraf-win-perf-counters` + * - :ref:`telegraf-win-services` - :strong:`X` - - * - :ref:`telegraf-win-services` - - :strong:`X` - - \ No newline at end of file + diff --git a/_includes/gdi/otel-receivers-table.rst b/_includes/gdi/otel-receivers-table.rst index ccf8ad0c3..82acb7f9b 100644 --- a/_includes/gdi/otel-receivers-table.rst +++ b/_includes/gdi/otel-receivers-table.rst @@ -1,7 +1,9 @@ * :ref:`apache-receiver` * :ref:`apache-spark-receiver` +* :ref:`awsecscontainermetrics-receiver` * :ref:`azureeventhub-receiver` * :ref:`carbon-receiver` +* :ref:`chrony-receiver` * :ref:`cloudfoundry-receiver` * :ref:`collectd-receiver` * :ref:`discovery-receiver` diff --git a/_includes/gdi/processor-architecture-native.rst b/_includes/gdi/processor-architecture-native.rst index f0fb4a00e..b9b422074 100644 --- a/_includes/gdi/processor-architecture-native.rst +++ b/_includes/gdi/processor-architecture-native.rst @@ -220,11 +220,6 @@ - Yes - Yes - Yes - * - ``etcd`` - - Yes - - Yes - - Yes - - Yes * - ``gitlab`` - Yes - Yes diff --git a/_includes/gdi/processor-architecture-subprocess.rst b/_includes/gdi/processor-architecture-subprocess.rst index 8841854af..fc0446c94 100644 --- a/_includes/gdi/processor-architecture-subprocess.rst +++ b/_includes/gdi/processor-architecture-subprocess.rst @@ -134,11 +134,6 @@ - Yes - Experimental - No - * - ``collectd/etcd`` - - Yes - - Yes - - Experimental - - No * - ``collectd/hadoop`` - Yes - Yes diff --git a/admin/authentication/authentication-tokens/org-tokens.rst b/admin/authentication/authentication-tokens/org-tokens.rst index 54c253971..6ff0f511b 100644 --- a/admin/authentication/authentication-tokens/org-tokens.rst +++ b/admin/authentication/authentication-tokens/org-tokens.rst @@ -22,9 +22,9 @@ Power users who have access to tokens in an organization see a banner, but only Token expiry ================ -Access tokens expire one year after the creation date. For access tokens created prior to February 28, 2022, the expiration date remains 5 years from the creation date. You can rotate a token before it expires using the Splunk Observability Cloud API. For details, see :ref:`access-token-rotate`. +You can view the expiration dates of your tokens through the access token page. To view this page, select :guilabel:`Settings` and select :guilabel:`Access tokens`. By default, access tokens expire 30 days after the creation date. You can rotate a token before it expires, or you can change the default expiration date during token creation. For details, see :ref:`access-token-rotate` and :ref:`create-access-token-date`. -Every organization admin will receive an email 30 days before a token in their org expires. The email includes a link to Splunk Observability Cloud that displays a list of expiring tokens. You cannot customize the token expiration email. +By default, every organization admin receives an email 30 days before a token in their org expires. The email includes a link to Splunk Observability Cloud that displays a list of expiring tokens. To change the expiration reminder date, see :ref:`create-access-token-date`. The default access token =========================== @@ -36,48 +36,60 @@ By default, every organization has one organization-level access token. If you d Manage access tokens ======================= -To manage your access (org) tokens: +To manage your access (org) tokens, follow these steps: #. Open the :guilabel:`Settings` menu. -#. 
Select :menuselection:`Access Tokens`. -#. To find the access token in a large list, start entering its name in the search box. Splunk Observability Cloud returns matching results. -#. To look at the details for an access token, select the expand icon next to the token name. +#. Select :guilabel:`Access Tokens`. +#. Find your token by using the :guilabel:`Status` and :guilabel:`Scope` filters or enter the token name in the search bar. +#. Select the expand icon next to the token name. This displays details about the token. For information about the access token permissions allowed by the :guilabel:`Authorization Scopes` field value, see the permissions step in :ref:`create-access-token`. -#. If you're an organization administrator, the actions menu (|more| icon) appears to the right side of the token listing. You can select token actions from this menu. +#. (Optional) If you're an organization administrator, the actions menu (|verticaldots|) appears to the right side of the token listing. You can select token actions from this menu. -#. To change the token visibility, follow these steps: +#. See :ref:`change-token-permissions` and :ref:`change-token-expiration` to modify token permissions and token expiration settings, respectively. - #. To display the available permissions, select the right arrow in the :guilabel:`Access Token Permissions` box. The following - permission options appear: +.. _change-token-permissions: + +Change token permissions +------------------------------------- + +If you're an organization administrator, you can change token permissions for other users and teams. + +To change the token permissions, follow these steps: + +#. Select the :guilabel:`Access Token Permissions` box. Choose from the following permission options: * :menuselection:`Only Admins can Read`: Only admin users can view or read the new token. The token isn't visible to other users. * :menuselection:`Admins and Select Users or Teams can Read`: Admin users and users or teams you select can view or read the new token. The token isn't visible to anyone else. * :menuselection:`Everyone can Read`: Every user and team in the organization can view and read the token. - #. To add permissions, select the left arrow below :guilabel:`Access Token Permissions`. - #. If you selected :guilabel:`Admins and Select Users or Teams can Read`, select the users or teams to whom you want to give access: - #. Select :guilabel:`Add Team or User`. Splunk Observability Cloud displays a list of teams and users in your organization. - #. To find the team or username in a large list, start entering the name in the search box. Splunk Observability Cloud returns matching results. - Select the user or team. - #. If you need to add more teams or users, select :guilabel:`Add Team or User` again. +#. To add permissions, select the left arrow below :guilabel:`Access Token Permissions`. +#. If you selected :guilabel:`Admins and Select Users or Teams can Read`, select the users or teams to whom you want to give access. +#. To remove a team or user, select the delete icon (:strong:`X`) next to the team or username. +#. To update the token, select :guilabel:`Update`. - .. note:: +.. _change-token-expiration: - You might see the following message in the middle of the dialog: +Change token expiration date and expiration alerts +------------------------------------------------------- - You are currently giving permissions to a team with Restrict Access deactivated. This means any user can join this team and is able to access this Access Token. 
+To change the token expiration date and expiration alerts, follow these steps: - This message means that all users are able to join the team and then view or read the access token. +#. In the token actions menu (|verticaldots|), select :guilabel:`Expiration date`. +#. In the :guilabel:`Expiration date` box, select a new expiration date for the token. +#. To change the visibility of the expiration alert, select from the following options: - #. To remove a team or user, select the delete icon (:strong:`X`) next to the team or username. - #. To update the token, select :guilabel:`Update`. + * :menuselection:`Admins and users or teams with token permissions can receive alert`: Admins and anyone with token permissions receive an alert when the token is close to expiring. + * :menuselection:`Only admins can receive alert`: Only admins receive an alert when the token is close to expiring. +#. Configure the type of alert that your recipients receive. +#. Change the time at which recipients receive an alert. For example, a value of ``7d`` means recipients receive an alert 7 days before the token expires. +#. Select :guilabel:`Update`. -View and copy access tokens -============================== +View and copy access token secrets +==================================== -To view the value of an access token, select the token name and then select :guilabel:`Show Token`. +To view the token secret, select the token name and then select :guilabel:`Show Token`. To copy the token value, select :guilabel:`Copy`. You don't need to be an administrator to view or copy an access token. @@ -87,53 +99,77 @@ To copy the token value, select :guilabel:`Copy`. You don't need to be an admini Create an access token ========================== +To get started with creating an access token, follow these steps: + +#. Open the Splunk Observability Cloud main menu. +#. Select :menuselection:`Settings` and select :menuselection:`Access Tokens`. +#. Select :guilabel:`New Token`. + +Next, complete each step in the access token creation guided setup: + +* :ref:`create-access-token-name`. +* :ref:`create-access-token-permissions`. +* :ref:`create-access-token-date`. + .. note:: - To do the following tasks, you must be an organization administrator. + You must be an organization administrator to create access tokens. -To create an access token: +.. _create-access-token-name: + +Name the token and select the authorization scope +------------------------------------------------------------------------- + +To get started with creating the token, enter a name and scope for the token. Complete the following steps: -#. Open the Splunk Observability Cloud main menu. -#. Select :menuselection:`Settings` and select :menuselection:`Access Tokens`. -#. Select :guilabel:`New Token`. If your organization has a long list of access tokens, you might need to scroll down to the bottom of the list to access this button. #. Enter a unique token name. If you enter a token name that is already in use, even if the token is inactive, Splunk Observability Cloud doesn't accept the name. -#. Select an authorization scope for the token from 1 of the following values: - - .. note:: Assign only 1 authorization scope to each token. Applying both the :strong:`API` and :strong:`Ingest` authorization scopes to the same token might raise a security concern. +#. Select an authorization scope. See the following table for information about the authorization scopes: + + .. 
list-table:: + :header-rows: 1 + + * - Authorization scope + - Description + * - RUM token + - Use this scope to authenticate with RUM ingest endpoints. These endpoints use the following base URL: ``https://rum-ingest..signalfx.com/v1/rum``. + * - Ingest token + - Use this scope to authenticate with data ingestion endpoints and when using the Splunk Distribution of OpenTelemetry Collector. These endpoints use the following base URLs: + + * POST :code:`https://ingest..signalfx.com/v2/datapoint` + * POST :code:`https://ingest..signalfx.com/v2/datapoint/otlp` + * POST :code:`https://ingest..signalfx.com/v2/event` + * POST :code:`https://ingest..signalfx.com/v1/trace` - - :strong:`RUM Token`: Select this authorization scope to use the token to authenticate with RUM ingest endpoints. These endpoints use the following base URL: :code:`https://rum-ingest..signalfx.com/v1/rum`. - - .. caution:: - RUM displays the RUM token in URIs that are visible in a browser. To preserve security, you can't assign the :strong:`Ingest` or :strong:`API` authorization scope to a RUM token. + For information about these endpoints, see :new-page:`Sending data points `. + * - API token + - Use this scope to authenticate with Splunk Observability Cloud API endpoints. These endpoints use the following base URLs: - - :strong:`Ingest Token`: Select this authorization scope to use the token to authenticate with data ingestion endpoints. These endpoints use the following base URLs: + * :code:`https://api..signalfx.com` + * :code:`wss://stream..signalfx.com` - - POST :code:`https://ingest..signalfx.com/v2/datapoint` - - POST :code:`https://ingest..signalfx.com/v2/datapoint/otlp` - - POST :code:`https://ingest..signalfx.com/v2/event` - - POST :code:`https://ingest..signalfx.com/v1/trace` + When you create an access token with API authentication scope, select at least one Splunk Observability Cloud role to associate with the token. You can select from ``power``, ``usage``, or ``read_only``. To learn more about Splunk Observability Cloud roles, see :ref:`roles-and-capabilities`. - For information about these endpoints, see :new-page:`Sending data points `. + For information about these endpoints, see :new-page:`Summary of Splunk Observability Cloud API Endpoints `. - .. note:: Use the ingest autorization scope for the Splunk Distribution of the OpenTelemetry Collector. See :ref:`otel-intro`. - - :strong:`API Token`: Select this authorization scope to use the token to authenticate with Splunk Observability Cloud endpoints. Example use cases are Terraform, programmatic usage of the API for business objects, and so on. These endpoints use the following base URLs: - - - :code:`https://api..signalfx.com` - - :code:`wss://stream..signalfx.com` +#. (Optional) Add a description for the token. +#. Select :guilabel:`Next` to continue to the next step. - When you create an access token with API authentication scope, select at least one Splunk Observability Cloud role to associate with the token. You can select from ``power``, ``usage``, or ``read_only``. To learn more about Splunk Observability Cloud roles, see :ref:`roles-and-capabilities`. +.. _create-access-token-permissions: - For information about these endpoints, see :new-page:`Summary of Splunk Observability Cloud API Endpoints `. +Determine who can view and use the token +-------------------------------------------------------- -#. Edit the visibility permissions: +Next, configure token permissions so your organization's users and teams can use the token. 
Complete the following steps: - #. To display the available permissions, select the right arrow in the :guilabel:`Access Token Permissions` box. The following - permission options appear: +#. Edit the visibility permissions. To display the available permissions, select the :guilabel:`Access Token Permissions` box. The following + permission options appear: * :menuselection:`Only Admins can Read`: Only admin users can view or read the new token. The token isn't visible to other users. * :menuselection:`Admins and Select Users or Teams can Read`: Admin users and users or teams you select can view or read the new token. The token isn't visible to anyone else. * :menuselection:`Everyone can Read`: Every user and team in the organization can view and read the token. - #. To add permissions, select the arrow below :guilabel:`Access Token Permissions`. + + To add permissions, select the arrow below :guilabel:`Access Token Permissions`. + #. If you selected :guilabel:`Admins and Select Users or Teams can Read`, select the users or teams to whom you want to give access: #. Select :guilabel:`Add Team or User`. Splunk Observability Cloud displays a list of teams and users in your organization. @@ -150,21 +186,54 @@ To create an access token: This message means that all users are able to join the team and then view or read the access token. #. To remove a team or user, select the delete icon (:strong:`X`) next to the team or username. -#. To create the new token, select :guilabel:`Create`. +#. Select :guilabel:`Next` to continue to the final step. + +.. _create-access-token-date: + +Configure an expiration date +----------------------------------------------- + +To finish creating the token, select an expiration date for the token. + +#. In the :guilabel:`Expiration date` box, select a date at which the token will expire. The date can't be over 18 years from the token creation date. +#. In the :guilabel:`Expiration alert` box, select from one of the following options: + + * :menuselection:`Only admins can receive alert`: Only admins receive an alert when the token is close to its expiration date. + * :menuselection:`Admins and users or teams with token permissions can receive alert`: Admins and any users with token permissions receive an alert when the token is close to its expiration date. + +#. (Optional) Set a time for when Splunk Observability Cloud sends an expiration alert. For example, a value of 7 days means Splunk Observability Cloud will send an alert 7 days before the token expires. +#. Select :guilabel:`Create` to finish creating the new token. .. _access-token-rotate: Rotate an access token ============================== -You can rotate an access token using the Splunk Observability Cloud API. This creates a new secret for the token and deactivates the token's previous secret. Optionally, you can provide a grace period before the previous token secret expires. +You can rotate an access token using the access token menu or the Splunk Observability Cloud API. This creates a new secret for the token and deactivates the token's previous secret. Optionally, you can provide a grace period before the previous token secret expires. You can't rotate tokens after they expire. If you don't rotate a token before it expires, you must create a new token to replace it. .. note:: You must be a Splunk Observability Cloud admin to rotate a token. -To rotate an access token, use the ``POST /token/{name}/rotate`` endpoint in the Splunk Observability Cloud API. 
An API call to rotate a token looks like this: +Rotate access tokens using the token menu +------------------------------------------------------------------- + +To rotate a token using the access token menu, follow these steps: + +#. In Splunk Observability Cloud, select :guilabel:`Settings`. +#. Select :guilabel:`Access tokens`. +#. In the access tokens menu, select the token you want to rotate. +#. Select :guilabel:`Rotate token`. +#. Enter an expiration date for the new token secret, and optionally, a grace period for the current token secret. +#. Select :guilabel:`Rotate`. + +After you're finished rotating the token, update any of your OpenTelemetry Collector configurations with the new token secret before the grace period ends. + +Rotate access tokens using the Splunk Observability Cloud API +------------------------------------------------------------------- + +To rotate an access token with the API, use the ``POST /token/{name}/rotate`` endpoint in the Splunk Observability Cloud API. An API call to rotate a token looks like this: .. code-block:: bash @@ -197,11 +266,11 @@ Rename an access token To rename a token: -#. Select :menuselection:`Edit Token` from the token's actions menu (|more|). +#. Select :menuselection:`Edit Token` from the token's actions menu (|verticaldots|). #. Enter a new name for the token. #. Select :guilabel:`OK`. -Renaming a token does not affect the value of the token. +Renaming a token does not affect the token's secret. .. note:: @@ -214,11 +283,19 @@ Deactivate or activate an access token You can't delete tokens. You can only deactivate them. -To deactivate a token, select :menuselection:`Disable` from the token's actions menu (|more| icon). -The line that displays the token has a shaded background, which indicates that the -token is inactive. The UI displays deactivated tokens at the end of the tokens list, -after the activated tokens. +To deactivate a token, select :menuselection:`Deactivate` from the token's actions menu (|verticaldots|). + +To activate a deactivated token, select :menuselection:`Activate` from the deactivated token's actions menu (|verticaldots|). + +You can search for activated or deactivated tokens using the :guilabel:`Status` filter in the access tokens page. + +Manage token limits +========================================= + +To change limits for your access tokens, including host and container limits, follow these steps: + +#. Select the token that you want to edit. This opens the token detail page. +#. Select the token actions menu (|verticaldots|), and select :guilabel:`Manage limits`. +#. In the :guilabel:`Manage limits` menu, add the new token limits. -To activate a deactivated token, select :menuselection:`Enable` from the deactivated -token's actions menu (|more| icon). The line that displays the token has a light background, -which indicates that the token is inactive. +To learn more about token limits, see :ref:`admin-manage-usage`. \ No newline at end of file diff --git a/admin/notif-services/servicenow.rst b/admin/notif-services/servicenow.rst index 8b29d70cd..29d2116da 100644 --- a/admin/notif-services/servicenow.rst +++ b/admin/notif-services/servicenow.rst @@ -44,7 +44,7 @@ Before you set up the integration, choose a ServiceNow issue type from the follo - ``user_admin``, ``itil`` - ``/api/now/v2/table/incident`` * - Event - - None + - ``evt_mgmt_integration``, only if :guilabel:`Requires ACL authorization` is selected for :strong:`Inbound Event Default Bulk Endpoint` in :strong:`Scripted Rest APIs`. 
To learn more, see the :new-page:`ServiceNow support article on events `. - ``/api/global/em/jsonv2`` Make note of the role and receiving endpoint that corresponds to your issue type before proceeding with :ref:`servicenow2`. @@ -112,9 +112,9 @@ To create a ServiceNow integration in Splunk Observability Cloud: To troubleshoot potential blind server-side request forgeries (SSRF), Splunk Observability Cloud has included ``\*.service-now.com`` on an allow list. As a result, if you enter a domain name that is rejected by Splunk Observability Cloud, contact :ref:`support` to update the allow list of domain names. -#. Select :strong:`Incident`, :strong:`Problem`, or :strong:`Event` to indicate the issue type you want the integration to create in ServiceNow. If necessary, you can create a second integration using the other issue type. This lets you create an incident issue for one detector rule and a problem issue for another detector rule. The following table shows the roles required to create each issue type: +#. Select :strong:`Incident`, :strong:`Problem`, or :strong:`Event` to indicate the issue type you want the integration to create in ServiceNow. If necessary, you can create a second integration using another issue type. This lets you create an incident issue for one detector rule and a problem issue for another detector rule. -#. :strong:`Save`. +#. Select :strong:`Save`. #. If Splunk Observability Cloud can validate the ServiceNow username, password, and instance name combination, a :strong:`Validated!` success message displays. If an error displays instead, make sure that the values you entered match the values in ServiceNow. diff --git a/alerts-detectors-notifications/detectors-best-practices.rst b/alerts-detectors-notifications/detectors-best-practices.rst new file mode 100644 index 000000000..73dc271fc --- /dev/null +++ b/alerts-detectors-notifications/detectors-best-practices.rst @@ -0,0 +1,52 @@ +.. _detectors-best-practices: + + +************************************************************************** +Best practices for creating detectors in Splunk Observability Cloud +************************************************************************** + +.. meta:: + :description: Splunk Observability Cloud uses detectors, events, alerts, and notifications to tell you when certain criteria are met. When a detector condition is met, the detector generates an event, triggers an alert, and can send one or more notifications. Follow these best practices in Splunk Observability Cloud when creating a detector. + +Splunk Observability Cloud uses detectors to set conditions that determine when to send an alert or notification to the appropriate team members. Detectors evaluate metric time series against a specified condition, and optionally for a duration. When a condition is met, detectors generate events with a level of severity. Severity levels are Info, Warning, Minor, Major, and Critical. These events are alerts that can trigger notifications in incident management platforms, such as PagerDuty, or messaging systems, such as Slack or email. + +Using static thresholds +========================================================================== +The most basic kind of alert triggers immediately when a simple metric crosses a static threshold. An example is anytime CPU utilization goes above 70%. Fixed thresholds are easy to implement and interpret when there are absolute goals to measure against. 
For example, if you know the typical memory per CPU profile of a certain application, you can define bounds that define normal state. Or, if you have a business requirement to serve requests within a certain time period, you know what is an unacceptable latency for that function. See :ref:`static-threshold` for more information. + +Consistent signal types +========================================================================== +For a detector to work properly, the signal that it evaluates must represent a consistent type of measurement. For example, when Splunk Observability Cloud reports ``cpu.utilization``, it is a value between 0 and 100 and represents the average utilization across all CPU cores for a single Linux instance or host. + +Do not use wildcards. If you use wildcards in your metric name, make sure that the wildcards do not mistakenly include metrics of different types. For example, if you enter ``jvm.*`` as the metric name, your detector can evaluate to ``jvm.heap``, ``jvm.uptime`` and ``jvm.cpu.load`` (assuming each is a metric names in use in your organization) against the same threshold, which might lead to unexpected results. + +Viewing at native data resolution +========================================================================== +A common and easy way to create a detector is to first create a chart, which lets you visualize the behavior of the signal you want to alert on, then convert it to a detector. See :new-page:`Create a detector from a chart ` to learn how. If you choose to use this method to create a detector, make sure you are visualizing the data at its native resolution, as this gives you the most accurate picture of the data that your detector evaluates. For example, if you create a detector using a metric that reports once every 10 seconds, make sure the time range for your chart is small enough (say, 15 minutes) to see individual measurements every 10 seconds. + +By default, Splunk Observability Cloud chooses a chart display resolution that fits within the time range you choose, and summarizes the data to match that resolution. For example, if you use a metric that reports every 10 seconds, but you look at a 1-day window, then by default the data you see on the chart represents 30-minute intervals. Depending on the rollup or summarization method, this could mean that any peaks or dips average out, which gives you an inaccurate understanding of your signal and what constitutes an appropriate detector threshold. Also, analytics pipelines are applied to the rolled-up data, so the meaning of a calculation might change if the resolution changes. For example, duration parameters, which you can use for timeshifting and smoothing data, have no effect when they are smaller than the resolution. + +.. _monitor-signal: + +Create detectors that monitor a single signal across a population +========================================================================== +Splunk Observability Cloud provides a simple and concise way of defining detectors that monitor a large number of similar items like the CPU utilization for all of the hosts in a given cluster. It accomplishes this through the metadata that is associated with metric time series, which is analogous to how that metadata - dimensions, properties or tags - creates charts. + +Let's look at an example. If you have a group of 30 hosts that provide a clustered service like Kafka, it normally includes a dimension like ``service:kafka`` with all of the metrics coming from those hosts. 
In this case, if you want to track whether CPU utilization remains below 70% for each of those hosts, you can create a single detector for the ``cpu.utilization`` metric that filters hosts using the ``service:kafka`` dimension and evaluates them against the static threshold of 70. This detector triggers individual alerts for each host whose CPU utilization exceeds the threshold - just as if you had 30 separate detectors - but you only need to create one detector, not 30. + +In addition, if the population changes - say, because the cluster grows to 40 hosts - you do not need to make any changes to your detector. As long as you include the ``service:kafka`` dimension for metrics coming from the new hosts, the existing detector finds them and automatically includes them in the threshold evaluation. + +Detectors that monitor a single signal work best when all of the members of the population have the same threshold, and the same notification policy. For example, they might publish alerts into the same Slack channel. If you have different thresholds or notification policies, you must create multiple detectors (one for each permutation of threshold and notification) or take advantage of the const function in SignalFlow. In any case, the likely number of such detectors is still fewer than the count of individual members that it monitors. It is important to create a detector for a signal, not for a microservice, in order to avoid accumulating too many detectors that trigger a multitude of alerts. + +Use aggregation to monitor sub-groups within a population +========================================================================== +You can also use detectors to monitor sub-groups within the population. For example, let’s say you have 100 hosts in total, divided among 10 services. You want to make sure the 95th percentile of CPU utilization across the cluster of hosts that provide each of those services remains below 70%. In this case, create a single detector for ``cpu.utilization``, then apply an analytics function of P95, and group by ``service``. The aggregation approach works only if ``service`` is a dimension or property. The aggregation approach does not work if ``service`` is a tag. + +This aggregation detector triggers alerts for each service, just as if you had 10 separate detectors - but you only need to create one detector, not 10. If you add additional services, the detector automatically monitors them as long as you have included a ``service`` dimension or property for the new services' metrics. + +You can also monitor individual members of a population for deviation from the population norm, optionally grouping by dimensions or properties, with the Outlier Detection built-in alert condition. See the population_comparison detector in the signalflow-library in GitHub at :new-page:`https://github.com/signalfx/signalflow-library/tree/master/library/signalfx/detectors/population_comparison`. 
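
The following SignalFlow sketch illustrates both patterns described in this topic: a single detector that evaluates every member of a population against a static threshold, and a detector that aggregates by a dimension. This is a sketch rather than a ready-made detector; the metric name, the ``service:kafka`` filter, the threshold of 70, and the ``lasting`` duration come from the examples above, so adjust them to match your own data and verify the function signatures against the SignalFlow reference.

.. code-block:: python

   # One detector for every host in the Kafka cluster. Each host that stays
   # above the threshold for the lasting duration triggers its own alert.
   A = data('cpu.utilization', filter=filter('service', 'kafka'))
   detect(when(A > 70, lasting='5m')).publish('CPU utilization above 70%')

   # One detector per service: alert when the 95th percentile of CPU
   # utilization for a service exceeds the threshold.
   B = data('cpu.utilization').percentile(pct=95, by=['service'])
   detect(when(B > 70, lasting='5m')).publish('P95 CPU utilization above 70%')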
+ + + + + diff --git a/gdi/databases.rst b/gdi/databases.rst index 73d064224..9346c22a4 100644 --- a/gdi/databases.rst +++ b/gdi/databases.rst @@ -15,7 +15,6 @@ Configure application receivers for databases monitors-databases/apache-spark monitors-databases/cassandra monitors-databases/consul - monitors-databases/etcd monitors-databases/exec-input monitors-databases/hadoop monitors-databases/hadoopjmx @@ -38,7 +37,6 @@ These application receivers gather metrics from their associated database-relate * :ref:`spark` * :ref:`cassandra` * :ref:`consul` -* :ref:`etcd` * :ref:`exec-input` * :ref:`hadoop` * :ref:`hadoopjmx` diff --git a/gdi/hosts-servers.rst b/gdi/hosts-servers.rst index c0e8e7056..b6236051d 100644 --- a/gdi/hosts-servers.rst +++ b/gdi/hosts-servers.rst @@ -29,7 +29,6 @@ Configure application receivers for hosts and servers monitors-hosts/elasticsearch-query monitors-hosts/filesystems monitors-hosts/haproxy - monitors-hosts/health-checker monitors-hosts/host-metadata opentelemetry/components/host-metrics-receiver monitors-hosts/host-processes @@ -72,7 +71,6 @@ These application receivers gather metrics from their associated host- and serve * :ref:`elasticsearch-query` * :ref:`filesystems` * :ref:`haproxy` -* :ref:`health-checker` * :ref:`host-metadata` * :ref:`host-metrics-receiver` * :ref:`processes` diff --git a/gdi/integrations-list.rst b/gdi/integrations-list.rst index 2a475cd3e..9c35227bb 100644 --- a/gdi/integrations-list.rst +++ b/gdi/integrations-list.rst @@ -107,24 +107,28 @@ For more information, see :ref:`get-started-rum`. .. raw:: html -

OpenTelemetry receivers

+

Applications and services

-Learn more at :ref:`OpenTelemetry receivers `. +.. raw:: html + + +

OpenTelemetry receivers

+ -These are the available OTel receivers: +You can monitor your applications and services with native OpenTelementry receivers. Learn more at :ref:`OpenTelemetry receivers `. + +These are the available OpenTelemetry receivers: .. include:: /_includes/gdi/otel-receivers-table.rst .. raw:: html -

Application and host integrations

+

Smart Agent integrations

-.. note:: The SignalFx Smart Agent has reached End of Support. While the agent can capture and export telemetry to Splunk Observability Cloud, Splunk no longer provides any support, feature updates, security, or bug fixes. Such requests are not bound by any SLAs. - -Smart Agent integrations and application receivers are available and supported through the Splunk Distribution of the OpenTelemetry Collector. For more information, see :ref:`migration-monitors`. +Smart Agent integrations are available and supported through the Splunk Distribution of the OpenTelemetry Collector. For more information, see :ref:`migration-monitors`. Browse available monitors by category: @@ -142,10 +146,20 @@ Browse available monitors by category: * :ref:`Applications: Orchestration ` * :ref:`Applications: Prometheus ` -These are the available Smart Agent monitors: +These are the available Smart Agent integrations: .. include:: /_includes/gdi/application-receiver-table.rst +.. raw:: html + + +

Deprecated integrations

+ + +These Smart Agent integrations are deprecated: + +.. include:: /_includes/gdi/application-receiver-table-deprecated.rst + .. raw:: html diff --git a/gdi/monitors-databases/consul.rst b/gdi/monitors-databases/consul.rst index 3ca721cda..36fade8a9 100644 --- a/gdi/monitors-databases/consul.rst +++ b/gdi/monitors-databases/consul.rst @@ -1,12 +1,18 @@ .. _consul: -Consul datastore -================ +Consul datastore (deprecated) +================================ .. meta:: :description: Use this Splunk Observability Cloud integration for the Consul datastore monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the +.. caution:: + + This integration is deprecated and will be removed in a future release. During this period only critical security and bug fixes are provided. When End of Support is reached, the monitor will be removed and no longer be supported, and you won't be able to use it to send data to Splunk Observability Cloud. + + To forward Consul datastore metrics to Splunk Observability Cloud use the :ref:`statsd-receiver` or :ref:`prometheus-receiver` instead. + +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the Consul datastore monitor type to monitor Consul datastores and collect metrics from the following endpoints: diff --git a/gdi/monitors-databases/etcd.rst b/gdi/monitors-databases/etcd.rst deleted file mode 100644 index c86242bb0..000000000 --- a/gdi/monitors-databases/etcd.rst +++ /dev/null @@ -1,180 +0,0 @@ -.. _etcd: - -etcd server -=========== - -.. meta:: - :description: Use this Splunk Observability Cloud integration for the etcd monitor. See benefits, install, configuration, and metrics - -The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the etcd monitor type to report etcd server metrics under the ``/metrics`` path on its client port. Optionally, you can edit the location using ``--listen-metrics-urls``. This integration only collects metrics from the Prometheus endpoint. - -Benefits --------- - -.. include:: /_includes/benefits.rst - -Installation ------------- - -.. include:: /_includes/collector-installation.rst - -Configuration -------------- - -.. include:: /_includes/configuration.rst - -Example -~~~~~~~ - -To activate this integration, add the following to your Collector -configuration: - -.. code-block:: yaml - - receivers: - smartagent/etcd: - type: etcd - ... # Additional config - -Next, add the monitor to the ``service.pipelines.metrics.receivers`` -section of your configuration file: - -.. code:: yaml - - service: - pipelines: - metrics: - receivers: [smartagent/etcd] - -Configuration settings -~~~~~~~~~~~~~~~~~~~~~~ - -The following table shows the configuration options for this monitor: - -.. list-table:: - :widths: 18 18 18 18 - :header-rows: 1 - - - - - - Option - - Required - - Type - - Description - - - - - ``httpTimeout`` - - no - - ``int64`` - - HTTP timeout duration for both read and writes. This should be a - duration string that is accepted by - https://golang.org/pkg/time/#ParseDuration (**default:** - ``10s``) - - - - - ``username`` - - no - - ``string`` - - Basic Auth username to use on each request, if any. - - - - - ``password`` - - no - - ``string`` - - Basic Auth password to use on each request, if any. - - - - - ``useHTTPS`` - - no - - ``bool`` - - If ``true``, the agent will connect to the server using HTTPS - instead of plain HTTP. 
(**default:** ``false``) - - - - - ``httpHeaders`` - - no - - ``map of strings`` - - A map of HTTP header names to values. Comma separated multiple - values for the same message-header is supported. - - - - - ``skipVerify`` - - no - - ``bool`` - - If useHTTPS is ``true`` and this option is also ``true``, the - exporter TLS cert will not be verified. (**default:** - ``false``) - - - - - ``caCertPath`` - - no - - ``string`` - - Path to the CA cert that has signed the TLS cert, unnecessary if - ``skipVerify`` is set to ``false``. - - - - - ``clientCertPath`` - - no - - ``string`` - - Path to the client TLS cert to use for TLS required connections - - - - - ``clientKeyPath`` - - no - - ``string`` - - Path to the client TLS key to use for TLS required connections - - - - - ``host`` - - **yes** - - ``string`` - - Host of the exporter - - - - - ``port`` - - **yes** - - ``integer`` - - Port of the exporter - - - - - ``useServiceAccount`` - - no - - ``bool`` - - Use pod service account to authenticate. (**default:** - ``false``) - - - - - ``metricPath`` - - no - - ``string`` - - Path to the metrics endpoint on the exporter server, usually - ``/metrics`` (the default). (**default:** ``/metrics``) - - - - - ``sendAllMetrics`` - - no - - ``bool`` - - Send all the metrics that come out of the Prometheus exporter - without any filtering. This option has no effect when using the - prometheus exporter monitor directly since there is no built-in - filtering, only when embedding it in other monitors. - (**default:** ``false``) - -Metrics -------- - -The following metrics are available for this integration: - -.. raw:: html - -
- -Notes -~~~~~ - -.. include:: /_includes/metric-defs.rst - -Troubleshooting ---------------- - -.. include:: /_includes/troubleshooting-components.rst diff --git a/gdi/monitors-hosts/amazon-ecs-metadata.rst b/gdi/monitors-hosts/amazon-ecs-metadata.rst index b2b1d9fcb..315f600bc 100644 --- a/gdi/monitors-hosts/amazon-ecs-metadata.rst +++ b/gdi/monitors-hosts/amazon-ecs-metadata.rst @@ -1,15 +1,14 @@ .. _amazon-ecs-metadata: -Amazon ECS Task Metadata endpoint -================================= +Amazon ECS Task Metadata endpoint (deprecated) +================================================================== .. meta:: :description: Use this Splunk Observability Cloud integration for the ECS metadata monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the -``ecs-metadata`` monitor type to read metadata and Docker stats from -Amazon ECS Task Metadata Endpoint version 2. This integration does not -currently support CPU share and quota metrics. +.. caution:: This integration is deprecated. If you're using the Splunk Distribution of the OpenTelemetry Collector and want to monitor task metadata and docker stats from Amazon ECS use the native OpenTelemetry component :ref:`awsecscontainermetrics-receiver` instead. + +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``ecs-metadata`` monitor type to read metadata and Docker stats from Amazon ECS Task Metadata Endpoint version 2. This integration does not currently support CPU share and quota metrics. This integration is only available on Kubernetes and Linux. diff --git a/gdi/monitors-hosts/chrony.rst b/gdi/monitors-hosts/chrony.rst index 8b3183ab3..38a3a2bee 100644 --- a/gdi/monitors-hosts/chrony.rst +++ b/gdi/monitors-hosts/chrony.rst @@ -1,12 +1,14 @@ .. _chrony: -Chrony NTP -========== +Chrony NTP (deprecated) +============================== .. meta:: :description: Use this Splunk Observability Cloud integration for the Chrony NTP monitor. See benefits, install, configuration, and metrics -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the +.. caution:: This integration is deprecated. If you're using the Splunk Distribution of the OpenTelemetry Collector and want to monitor Chrony use the native OpenTelemetry component :ref:`chrony-receiver` instead. + +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the Chrony NTP monitor type to monitor NTP data from a chrony server, such as clock skew and per-peer stratum. To talk to chronyd, this integration mimics what the chronyc control program does on the wire. diff --git a/gdi/monitors-hosts/disk.rst b/gdi/monitors-hosts/disk.rst index cd382fbe0..b5d7c1758 100644 --- a/gdi/monitors-hosts/disk.rst +++ b/gdi/monitors-hosts/disk.rst @@ -6,7 +6,7 @@ Disk and partition (deprecated) .. meta:: :description: Use this Splunk Observability Cloud integration for the disks monitor. See benefits, install, configuration, and metrics -.. note:: This integration is deprecated. If you're using the Splunk Distribution of the OpenTelemetry Collector and want to collect disk I/O metrics, use the native OTel component :ref:`host-metrics-receiver`. +.. caution:: This integration is deprecated. If you're using the Splunk Distribution of the OpenTelemetry Collector and want to collect disk I/O metrics, use the native OTel component :ref:`host-metrics-receiver`. 
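As a sketch of the suggested replacement, the following Collector configuration collects disk I/O metrics with the ``disk`` scraper of the host metrics receiver. The ``collection_interval`` value is only an example; see :ref:`host-metrics-receiver` for the full list of scrapers and settings.

.. code-block:: yaml

   receivers:
     hostmetrics:
       collection_interval: 10s
       scrapers:
         disk:

   service:
     pipelines:
       metrics:
         receivers: [hostmetrics]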
Configuration settings ---------------------- diff --git a/gdi/monitors-hosts/health-checker.rst b/gdi/monitors-hosts/health-checker.rst deleted file mode 100644 index f5b4e8cb2..000000000 --- a/gdi/monitors-hosts/health-checker.rst +++ /dev/null @@ -1,177 +0,0 @@ -.. _health-checker: - -Health Checker -============== - -.. meta:: - :description: Use this Splunk Observability Cloud integration for the Health Checker monitor. See benefits, install, configuration, and metrics - -The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the -Health Checker monitor type to check whether the configured JSON value -is returned in the response body. - -Benefits --------- - -.. include:: /_includes/benefits.rst - -Installation ------------- - -.. include:: /_includes/collector-installation.rst - -Configuration -------------- - -.. include:: /_includes/configuration.rst - -### Example - -To activate this integration, add the following to your Collector -configuration: - -.. code-block:: yaml - - receivers: - smartagent/health-checker: - type: collectd/health-checker - ... # Additional config - -Next, add the monitor to the ``service.pipelines.metrics.receivers`` -section of your configuration file: - -.. code-block:: yaml - - service: - pipelines: - metrics: - receivers: [smartagent/health-checker] - -Configuration settings -~~~~~~~~~~~~~~~~~~~~~~ - -The following table shows the configuration options for the Health -Checker monitor: - -.. list-table:: - :widths: 18 18 18 18 - :header-rows: 1 - - - - - - Option - - Required - - Type - - Description - - - - - ``pythonBinary`` - - no - - ``string`` - - Path to a python binary that should be used to execute the - Python code. If not set, a built-in runtime will be used. Can - include arguments to the binary as well. - - - - - ``host`` - - **yes** - - ``string`` - - - - - - - ``port`` - - **yes** - - ``integer`` - - - - - - - ``name`` - - no - - ``string`` - - - - - - - ``path`` - - no - - ``string`` - - The HTTP path that contains a JSON document to verify - (**default:** ``/``) - - - - - ``jsonKey`` - - no - - ``string`` - - If ``jsonKey`` and ``jsonVal`` are given, the given endpoint - will be interpreted as a JSON document and will be expected to - contain the given key and value for the service to be - considered healthy. - - - - - ``jsonVal`` - - no - - ``any`` - - This can be either a string or numeric type - - - - - ``useHTTPS`` - - no - - ``bool`` - - If ``true``, the endpoint will be connected to on HTTPS instead - of plain HTTP. It is invalid to specify this if ``tcpCheck`` is - ``true``. (**default:** ``false``) - - - - - ``skipSecurity`` - - no - - ``bool`` - - If ``true``, and ``useHTTPS`` is ``true``, the server's SSL/TLS - cert will not be verified. (**default:** ``false``) - - - - - ``tcpCheck`` - - no - - ``bool`` - - If ``true``, the plugin will verify that it can connect to the - given host/port value. JSON checking is not supported. - (**default:** ``false``) - -Metrics -------- - -The following metrics are available for this integration: - -.. list-table:: - :widths: 13 34 13 13 - :header-rows: 1 - - - - - - Name - - Description - - Sample value - - Category - - - - - ``gauge.service.health.status`` - - The HTTP response status code for the request made to the - application being monitored. A ``200`` value means an HTTP 200 - OK success status response was returned, so the application is - healthy. 
- - ``200`` - - Default - - - - - ``gauge.service.health.value`` - - ``0`` means an unhealthy state, and ``1`` means a healthy state. - - ``0`` or ``1`` - - Default - -Notes -~~~~~ - -.. include:: /_includes/metric-defs.rst - -Troubleshooting ---------------- - -.. include:: /_includes/troubleshooting-components.rst diff --git a/gdi/monitors-hosts/host-processload.rst b/gdi/monitors-hosts/host-processload.rst index 49c43cdc3..df45b4ab5 100644 --- a/gdi/monitors-hosts/host-processload.rst +++ b/gdi/monitors-hosts/host-processload.rst @@ -6,7 +6,7 @@ Host process load (deprecated) .. meta:: :description: Use this Splunk Observability Cloud integration for the load monitor. See benefits, install, configuration, and metrics -.. note:: This integration is deprecated. If you're using the Splunk Distribution of the OpenTelemetry Collector and want to collect CPU load metrics, use the native OTel component :ref:`host-metrics-receiver`. +.. caution:: This integration is deprecated. If you're using the Splunk Distribution of the OpenTelemetry Collector and want to collect CPU load metrics use the native OTel component :ref:`host-metrics-receiver`. Configuration options --------------------- diff --git a/gdi/monitors-hosts/win-services.rst b/gdi/monitors-hosts/win-services.rst index 3938f1df5..8e7a10e30 100644 --- a/gdi/monitors-hosts/win-services.rst +++ b/gdi/monitors-hosts/win-services.rst @@ -1,7 +1,7 @@ .. _telegraf-win-services: -Windows Services -================ +Windows Services +================================ .. meta:: :description: Use this Splunk Observability Cloud integration for the Telegraf Win_services monitor. See benefits, install, configuration, and metrics diff --git a/gdi/monitors-languages/asp-dot-net.rst b/gdi/monitors-languages/asp-dot-net.rst index 5ccc4ccec..7157d7076 100644 --- a/gdi/monitors-languages/asp-dot-net.rst +++ b/gdi/monitors-languages/asp-dot-net.rst @@ -6,9 +6,13 @@ ASP.NET (deprecated) .. meta:: :description: Use this Splunk Observability Cloud integration for the ASP.NET app monitor. See benefits, install, configuration, and metrics -.. note:: This integration is deprecated and will be removed in February 2025. To forward data to Splunk Observability Cloud, use the Splunk Distribution of OpenTelemetry .NET. For a full list of collected metrics, refer to :ref:`dotnet-otel-metrics-attributes`. +.. caution:: + + This integration is deprecated and will reach End of Support in February 2025. During this period only critical security and bug fixes are provided. When End of Support is reached, the monitor will be removed and no longer be supported, and you won't be able to use it to send data to Splunk Observability Cloud. -The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the + To forward data from a .NET application to Splunk Observability Cloud use the :ref:`Splunk Distribution of OpenTelemetry .NET ` instead. + +The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``aspdotnet`` monitor type to retrieve metrics for requests, errors, sessions, and worker processes from ASP.NET applications. diff --git a/gdi/monitors-monitoring/win_perf_counters.rst b/gdi/monitors-monitoring/win_perf_counters.rst index 8434134c9..1699e2b9b 100644 --- a/gdi/monitors-monitoring/win_perf_counters.rst +++ b/gdi/monitors-monitoring/win_perf_counters.rst @@ -1,12 +1,16 @@ .. 
_telegraf-win-perf-counters: -Windows Performance Counters -============================ +Windows Performance Counters (deprecated) +======================================================== .. meta:: :description: Use this Splunk Observability Cloud integration for the Telegraf win_perf_counters monitor for Windows. See benefits, install, configuration, and metrics -.. note:: For information on the OpenTelemetry receiver based on the Windows Performance Counters input plugin, see :ref:`Windows Performance Counters receiver `. +.. caution:: + + This integration is deprecated and will reach End of Support in a future release. During this period only critical security and bug fixes are provided. When End of Support is reached, the monitor will be removed and no longer be supported, and you won't be able to use it to send data to Splunk Observability Cloud. + + To forward metrics from Windows Performance Counters to Splunk Observability Cloud use the :ref:`windowsperfcounters-receiver` instead. The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``telegraf/win_perf_counters`` monitor type to receive metrics from Windows performance counters. diff --git a/gdi/monitors-network/statsd.rst b/gdi/monitors-network/statsd.rst index 4dbc6f052..c933a4972 100644 --- a/gdi/monitors-network/statsd.rst +++ b/gdi/monitors-network/statsd.rst @@ -1,11 +1,17 @@ .. _statsd: -Statsd -====== +Statsd (deprecated) +====================== .. meta:: :description: Use this Splunk Observability Cloud integration for the Statsd monitor. See benefits, install, configuration, and metrics +.. caution:: + + This integration is deprecated and will be removed in a future release. During this period only critical security and bug fixes are provided. When End of Support is reached, the monitor will be removed and no longer be supported, and you won't be able to use it to send data to Splunk Observability Cloud. + + To forward statsd metrics to Splunk Observability Cloud use the :ref:`statsd-receiver` instead. + The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the ``statsd`` monitor type to collect statsd metrics. It listens on a configured address and port to receive the statsd metrics. This integration supports certain Stats types, which are dispatched as ``counter`` or ``gauges`` types in Splunk Observability Cloud, as displayed in the table. Statsd extensions such as tags are not supported. diff --git a/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst b/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst index 311195e10..ba14928b5 100644 --- a/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst +++ b/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst @@ -95,12 +95,12 @@ Availability The following components provide control plane metrics: -* :ref:`CoreDNS `. -* :ref:`etcd`. To retrieve etcd metrics, see :new-page:`Setting up etcd metrics `. -* :ref:`Kubernetes controller manager `. -* :ref:`Kubernetes API server `. -* :ref:`Kubernetes proxy `. -* :ref:`Kubernetes scheduler `. 
+* :ref:`CoreDNS ` +* :ref:`awsecscontainermetrics-receiver` +* :ref:`Kubernetes controller manager ` +* :ref:`Kubernetes API server ` +* :ref:`Kubernetes proxy ` +* :ref:`Kubernetes scheduler ` Use custom configurations for non-standard control plane components ----------------------------------------------------------------------------- diff --git a/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst b/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst index 0da0e30ad..a154fe129 100644 --- a/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst +++ b/gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst @@ -553,7 +553,7 @@ Control plane metrics To see the control plane metrics the Collector provides, see: * :ref:`CoreDNS ` -* :ref:`etcd` +* :ref:`awsecscontainermetrics-receiver` * :ref:`Kubernetes controller manager ` * :ref:`Kubernetes API server ` * :ref:`Kubernetes proxy ` diff --git a/gdi/opentelemetry/components.rst b/gdi/opentelemetry/components.rst index ab85e7e6b..090ad0a75 100644 --- a/gdi/opentelemetry/components.rst +++ b/gdi/opentelemetry/components.rst @@ -55,12 +55,18 @@ The Splunk Distribution of the OpenTelemetry Collector includes and supports the * - :ref:`apache-spark-receiver` (``apachespark``) - Fetches metrics for an Apache Spark cluster through the Apache Spark REST API. - Metrics + * - :ref:`awsecscontainermetrics-receiver` (``awsecscontainermetrics``) + - Reads task metadata and docker stats from Amazon ECS and generates resource usage metrics. + - Metrics * - :ref:`azureeventhub-receiver` (``azureeventhub``) - Pulls logs from an Azure event hub. - Logs * - :ref:`carbon-receiver` (``carbon``) - Receives metrics in Carbon plaintext protocol. - Metrics + * - :ref:`chrony-receiver` (``chrony``) + - Go implementation of the ``chronyc`` command to track portability across systems and platforms. + - Metrics * - :ref:`cloudfoundry-receiver` (``cloudfoundry``) - Connects to the Reverse Log Proxy (RLP) gateway of Cloud Foundry to extract metrics. - Metrics diff --git a/gdi/opentelemetry/components/a-components-receivers.rst b/gdi/opentelemetry/components/a-components-receivers.rst index a91dd12c9..a6141772a 100644 --- a/gdi/opentelemetry/components/a-components-receivers.rst +++ b/gdi/opentelemetry/components/a-components-receivers.rst @@ -14,8 +14,10 @@ Collector components: Receivers apache-receiver apache-spark-receiver + awsecscontainermetrics-receiver azureeventhub-receiver carbon-receiver + chrony-receiver cloudfoundry-receiver collectd-receiver discovery-receiver diff --git a/gdi/opentelemetry/components/awsecscontainermetrics-receiver.rst b/gdi/opentelemetry/components/awsecscontainermetrics-receiver.rst new file mode 100644 index 000000000..eebdfa9f0 --- /dev/null +++ b/gdi/opentelemetry/components/awsecscontainermetrics-receiver.rst @@ -0,0 +1,14 @@ +.. _awsecscontainermetrics-receiver: + +************************************** +AWS ECS container metrics receiver +************************************** + +.. meta:: + :description: The AWS ECS Container Metrics Receiver (awsecscontainermetrics) reads task metadata and docker stats from Amazon ECS Task Metadata Endpoint, and generates resource usage metrics (such as CPU, memory, network, and disk) from them. + +The Splunk Distribution of the OpenTelemetry Collector supports the AWS ECS container metrics receiver. Documentation is planned for a future release. + +To find information about this component in the meantime, see :new-page:`AWS ECS container metrics receiver ` on GitHub. 
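Until the full documentation is published, the following configuration sketch shows one way to add the receiver to a Collector metrics pipeline. The ``collection_interval`` value and the use of the ``signalfx`` exporter are illustrative assumptions; check the GitHub README linked above for the complete set of settings.

.. code-block:: yaml

   receivers:
     awsecscontainermetrics:
       collection_interval: 20s

   exporters:
     signalfx:
       access_token: ${SPLUNK_ACCESS_TOKEN}
       realm: ${SPLUNK_REALM}

   service:
     pipelines:
       metrics:
         receivers: [awsecscontainermetrics]
         exporters: [signalfx]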
+ + diff --git a/gdi/opentelemetry/components/chrony-receiver.rst b/gdi/opentelemetry/components/chrony-receiver.rst new file mode 100644 index 000000000..e61e86c09 --- /dev/null +++ b/gdi/opentelemetry/components/chrony-receiver.rst @@ -0,0 +1,12 @@ +.. _chrony-receiver: + +**************************** +Chrony receiver +**************************** + +.. meta:: + :description: Go implementation of the chronyc tracking command that allows for portability across systems and platforms. + +The Splunk Distribution of the OpenTelemetry Collector supports the Chrony receiver. Documentation is planned for a future release. + +To find information about this component in the meantime, see :new-page:`Chrony receiver ` on GitHub. diff --git a/gdi/opentelemetry/deployments/deployments-ecs-ec2.rst b/gdi/opentelemetry/deployments/deployments-ecs-ec2.rst index dcdd283bf..6b6decab7 100644 --- a/gdi/opentelemetry/deployments/deployments-ecs-ec2.rst +++ b/gdi/opentelemetry/deployments/deployments-ecs-ec2.rst @@ -5,9 +5,9 @@ Deploy the Collector with Amazon ECS EC2 ******************************************************** .. meta:: - :description: Deploy the Splunk Observability Cloud OpenTelemetry Collector as a Daemon service in an Amazon ECS EC2 cluster. + :description: Deploy the Splunk Observability Cloud OpenTelemetry Collector as a Sidecar in an Amazon ECS EC2 cluster. -Use the guided setup to deploy the Collector as a Daemon service in an Amazon ECS EC2 cluster. The guided setup provides a JSON task definition for the Collector. +Use the guided setup to deploy the Collector as a Sidecar in an Amazon ECS EC2 cluster. The guided setup provides a JSON task definition for the Collector. Choose one of the following Collector configuration options: @@ -28,76 +28,43 @@ Getting started The following sections describe how to create a task definition and launch the Collector. A task definition is required to run Docker containers in Amazon ECS. After creating the task definition, you need to launch the Collector. -Create a task definition +Add the Collector as a Sidecar --------------------------------- -.. note:: - - Knowledge of Amazon ECS using launch type EC2 is assumed. See :new-page:`Getting started with the classic console using Amazon EC2 ` for further reading. - -Creating the task definition requires using release v0.34.1 or newer (which corresponds to image tag 0.34.1 and newer) of the Collector. See the :new-page:`image repository ` to download the latest image. +.. note:: To use this option, you need to be familiar with the Amazon ECS EC2 launch type. See :new-page:`Getting started with the classic console using Amazon EC2 ` for further reading. -To create the task definition: +Open the ECS task definition to which you want to add the Collector Sidecar: 1. Locate the task definition for the Collector from the :new-page:`repository `. -2. Replace ``MY_SPLUNK_ACCESS_TOKEN`` and ``MY_SPLUNK_REALM`` with valid values. You should pin the image version to a specific version instead of ``latest`` to avoid upgrade issues. -3. Create a task definition of EC2 launch type. See :new-page:`Creating a task definition using the new console ` for the instructions. The supplied task definition is a minimal definition. See :new-page:`Task definition parameters ` for additional configuration options. +2. Merge the Collector definitions into the existing ECS task definition. +3. Replace ``MY_SPLUNK_ACCESS_TOKEN`` and ``MY_SPLUNK_REALM`` with valid values.
You can pin the image to a specific version instead of ``latest`` if you want to avoid automatic upgrades. The Collector is configured to use the default configuration file ``/etc/otel/collector/ecs_ec2_config.yaml``. The Collector image Dockerfile is available at :new-page:`Dockerfile ` and the contents of the default configuration file can be seen at :new-page:`ECS EC2 configuration `. -.. note:: - - You do not need the ``smartagent/ecs-metadata`` metrics receiver in the default configuration file if all you want is tracing. You can take the default configuration, remove the receiver, then use the configuration in a custom configuration following the directions in :ref:`ecs-ec2-custom-config`. - -The configured network mode for the task is ``host``. This means that task metadata endpoint version 2 used by the ``smartagent/ecs-metadata`` receiver is not activated by default. See :new-page:`task metadata endpoint ` to determine if task metadata endpoint version 3 is activated by default for your task. If this version is activated, then add the following to the environment list in the task definition: +Notes: -.. code-block:: none +* You do not need the ``awsecscontainermetrics`` receiver in the default configuration file if all you want is tracing. You can take the default configuration, remove the receiver, then use the configuration in a custom configuration following the directions in :ref:`ecs-ec2-custom-config`. - { - "name": "ECS_TASK_METADATA_ENDPOINT", - "value": "${ECS_CONTAINER_METADATA_URI}/task" - }, - { - "name": "ECS_TASK_STATS_ENDPOINT", - "value": "${ECS_CONTAINER_METADATA_URI}/task/stats" - } +* To exclude metrics, assign them as a stringified array to the ``METRICS_TO_EXCLUDE`` environment variable. -Assign a stringified array of metrics you want excluded to environment variable ``METRICS_TO_EXCLUDE``. You can set the memory limit for the ``memory_limiter`` processor using environment variable ``SPLUNK_MEMORY_LIMIT_MIB``. The default memory limit is 512 MiB. -Launch the Collector -============================= -The Collector is designed to be run as a Daemon service in an EC2 ECS cluster. To create a Collector service from the Amazon ECS console: - -#. Go to your cluster in the console. -#. Select :guilabel:`Services`. -#. Select :guilabel:`Create`. -#. Select the following options: - #. Launch Type: EC2 - #. Task Definition (Family): splunk-otel-collector - #. Task Definition (Revision): 1 (or whatever the latest is in your case) - #. Service Name: splunk-otel-collector - #. Service type: DAEMON - #. Leave everything else at default. -#. Select :guilabel:`Next step`. -#. Leave everything on this next page at their defaults and select :guilabel:`Next step`. -#. Leave everything on this next page at their defaults and select :guilabel:`Next step`. -#. Select :guilabel:`Create Service` to deploy the Collector onto each node in the ECS cluster. You should see infrastructure and docker metrics flowing soon. +* You can set the memory limit for the ``memory_limiter`` processor using the ``SPLUNK_MEMORY_LIMIT_MIB`` environment variable. The default memory limit is 512 MiB. .. _ecs-ec2-custom-config: Use a custom configuration ============================== + To use a custom configuration file, replace the value of the ``SPLUNK_CONFIG`` environment variable with the file path of the custom configuration file in the Collector task definition.
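+The custom configuration file is a standard Collector YAML file. As a hypothetical illustration only, a tracing-only file (in the spirit of the earlier note about removing the ``awsecscontainermetrics`` receiver) might look like the following sketch; the exporter choice and file contents are assumptions, not the contents of the default ``ecs_ec2_config.yaml``:
+
+.. code-block:: yaml
+
+   receivers:
+     otlp:
+       protocols:
+         grpc:
+         http:
+
+   exporters:
+     sapm:
+       # SPLUNK_ACCESS_TOKEN and SPLUNK_REALM come from the task definition environment.
+       access_token: "${SPLUNK_ACCESS_TOKEN}"
+       endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
+
+   service:
+     pipelines:
+       traces:
+         receivers: [otlp]
+         exporters: [sapm]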
Alternatively, you can specify the custom configuration YAML directly using the ``SPLUNK_CONFIG_YAML`` environment variable, as described in :ref:`ecs-observer-config`. .. _ecs-observer-config: -``ecs_observer`` configuration +Configure ``ecs_observer`` -------------------------------- -Use extension Amazon Elastic Container Service Observer (``ecs_observer``) in your custom configuration to discover metrics targets in running tasks, filtered by service names, task definitions, and container labels. ``ecs_observer`` is currently limited to Prometheus targets and requires the read-only permissions below. You can add the permissions to the task role by adding them to a customer-managed policy that is attached to the task role. -.. code-block:: yaml +Use extension Amazon Elastic Container Service Observer (``ecs_observer``) in your custom configuration to discover metrics targets in running tasks, filtered by service names, task definitions, and container labels. ``ecs_observer`` is currently limited to Prometheus targets and requires the following read-only permissions. The Collector should be configured to run as an ECS Daemon. You can grant the permissions by attaching a customer-managed policy that contains them to the task role. +.. code-block:: yaml ecs:List* ecs:Describe* @@ -108,7 +75,6 @@ The results are written to ``/etc/ecs_sd_targets.yaml``. The ``prometheus`` rece .. code-block:: yaml - extensions: ecs_observer: refresh_interval: 10s @@ -147,6 +113,26 @@ The results are written to ``/etc/ecs_sd_targets.yaml``. The ``prometheus`` rece .. _aws-parameter-store: +Launch the Collector as a Daemon +-------------------------------------------- + +To launch the Collector from the Amazon ECS console: + +#. Go to your cluster in the console. +#. Select :guilabel:`Services`. +#. Select :guilabel:`Create`. +#. Select the following options: + #. Launch Type: EC2 + #. Task Definition (Family): splunk-otel-collector + #. Task Definition (Revision): 1 (or the latest available revision) + #. Service Name: splunk-otel-collector + #. Service type: DAEMON + #. Leave everything else at default. +#. Select :guilabel:`Next step`. +#. Leave everything on the next page at its default values and select :guilabel:`Next step`. +#. Leave everything on the next page at its default values and select :guilabel:`Next step`. +#. Select :guilabel:`Create Service` to deploy the Collector onto each node in the ECS cluster. You should see infrastructure and Docker metrics flowing soon. + Use the AWS Parameter Store ---------------------------- diff --git a/index.rst b/index.rst index 95e9fc165..59f27acb4 100644 --- a/index.rst +++ b/index.rst @@ -505,6 +505,11 @@ To keep up to date with changes in the products, see the Splunk Observability Cl Introduction to alerts and detectors +.. toctree:: + :maxdepth: 3 + + Best practices for detectors + .. toctree:: :maxdepth: 3 @@ -843,7 +848,18 @@ To keep up to date with changes in the products, see the Splunk Observability Cl .. toctree:: :maxdepth: 3 - Configure your tests TOGGLE + Advanced test configurations TOGGLE + +.. toctree:: + :maxdepth: 3 + + Troubleshoot tests TOGGLE + .. 
toctree:: :caption: Splunk On-Call diff --git a/release-notes/2024-10-01-rn.rst b/release-notes/2024-10-01-rn.rst index d502ccdf2..a4ff1c1c5 100644 --- a/release-notes/2024-10-01-rn.rst +++ b/release-notes/2024-10-01-rn.rst @@ -66,4 +66,19 @@ Service level objective (SLO) * - New feature or enhancement - Description * - SignalFlow editor for custom metrics SLO - - You can use SignalFlow to define metrics and filters when creating a custom metric SLO. For more information, see :ref:`create-slo`. The feature released on October 2, 2024. \ No newline at end of file + - You can use SignalFlow to define metrics and filters when creating a custom metric SLO. For more information, see :ref:`create-slo`. The feature released on October 2, 2024. + +.. _auth-2024-10-01: + +Authentication +============== + +.. list-table:: + :header-rows: 1 + :widths: 1 2 + :width: 100% + + * - New feature or enhancement + - Description + * - Token management improvements + - Admin and power users have a new token management interface that includes long-lived tokens, improved token visibility and rotation, and a design that is aligned with Splunk Cloud Platform. For more information, see :ref:`admin-org-tokens`. The feature released on October 23, 2024. \ No newline at end of file diff --git a/release-notes/release-notes-overview.rst b/release-notes/release-notes-overview.rst index 954d85540..c978e825b 100644 --- a/release-notes/release-notes-overview.rst +++ b/release-notes/release-notes-overview.rst @@ -32,6 +32,7 @@ Each release date includes new features and enhancements for SaaS and versioned * :ref:`Data ingest ` * :ref:`Data management ` * :ref:`Service level objective ` + * :ref:`Token management improvements ` .. _changelogs: diff --git a/synthetics/api-test/api-test.rst b/synthetics/api-test/api-test.rst index 274bb8507..97a7c0f12 100644 --- a/synthetics/api-test/api-test.rst +++ b/synthetics/api-test/api-test.rst @@ -1,7 +1,7 @@ .. _api-test: ************************************ -Use an API Test to test an endpoint +API Tests for endpoints ************************************ .. meta:: diff --git a/synthetics/browser-test/browser-test-metrics.rst b/synthetics/browser-test/browser-test-metrics.rst index bb707f5fd..84960e1f5 100644 --- a/synthetics/browser-test/browser-test-metrics.rst +++ b/synthetics/browser-test/browser-test-metrics.rst @@ -162,7 +162,7 @@ Performance timing metrics capture information about how long it takes resources Web vitals ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Web vitals capture key metrics that affect user experience. +Web vitals are metrics that represent user experience in terms of loading, interactivity, and visual stability. .. list-table:: :header-rows: 1 @@ -172,17 +172,17 @@ Web vitals capture key metrics that affect user experience. - :strong:`Source metric name` - :strong:`Description` - * - Cumulative layout shift (CLS) - - ``synthetics.webvitals_cls.score`` - - Measures page stability. CLS is based on a formula that tallies up how many times the components on the page move or “shift” around while the page is loading. Fewer shifts are better. - * - Largest contentful paint (LCP) - ``synthetics.webvitals_lcp.time.ms`` - Measures page loading times as perceived by users. The LCP metric reports the render time of the largest content element visible within the viewport. * - Total blocking time (TBT) - ``synthetics.webvitals_tbt.time.ms`` - - Captures issues that affect interactivity. 
TBT is a synthetic alternative for First Input Delay (INP), which measures page responsiveness to user input. Optimizations that improve TBT in the lab can also help improve INP for your users. + - Captures issues that affect interactivity. TBT is a synthetic alternative for Interaction to Next Paint (INP), which measures page responsiveness to user input. Optimizations that improve TBT in the lab can also help improve INP for your users. + + * - Cumulative layout shift (CLS) + - ``synthetics.webvitals_cls.score`` + - Measures page stability. CLS is based on a formula that tallies up how many times the components on the page move or “shift” around while the page is loading. Fewer shifts are better. To learn more about web vitals, see :new-page:`https://web.dev/vitals/` in the Google developer documentation. diff --git a/synthetics/browser-test/browser-test-results.rst b/synthetics/browser-test/browser-test-results.rst index 8988cfd59..64492fb8f 100644 --- a/synthetics/browser-test/browser-test-results.rst +++ b/synthetics/browser-test/browser-test-results.rst @@ -1,7 +1,7 @@ .. _browser-test-results: *********************************************** -Interpret Browser Test results +Interpret Browser test results *********************************************** .. meta:: diff --git a/synthetics/browser-test/browser-test.rst b/synthetics/browser-test/browser-test.rst index a349f953b..51b2ec22a 100644 --- a/synthetics/browser-test/browser-test.rst +++ b/synthetics/browser-test/browser-test.rst @@ -1,7 +1,7 @@ .. _browser-test: **************************************** -Use a Browser test to test a webpage +Browser tests for webpages **************************************** .. meta:: @@ -21,7 +21,7 @@ You can configure tests on a schedule so you're continually monitoring your site .. raw:: html -

-  What happens during a Browser test?
+  What does a Browser test monitor?

During a Browser test, Splunk Synthetic Monitoring continuously collects performance data including metrics, network data, and custom user timings. All requests and responses that occur in the test are captured in a HAR file, which is represented visually in a waterfall chart that illustrates the latency of specific resources on the page. See :ref:`waterfall-chart` to learn more about the waterfall chart, and see :ref:`browser-metrics` to learn about the metrics in a Browser test. diff --git a/synthetics/set-up-synthetics/set-up-synthetics.rst b/synthetics/set-up-synthetics/set-up-synthetics.rst index 5ba829a2d..5f5521428 100644 --- a/synthetics/set-up-synthetics/set-up-synthetics.rst +++ b/synthetics/set-up-synthetics/set-up-synthetics.rst @@ -11,6 +11,30 @@ Set up Splunk Synthetic Monitoring Monitor the performance of your web pages and applications by running synthetic Browser, Uptime, and API tests. These tests let you proactively alert the relevant teams when a site or user flow they manage becomes unavailable, as well as report on the performance of a site or user flow over time. Splunk Synthetic Monitoring does not require extensive installation and setup: you can get started by creating your first test directly in the Splunk Synthetic Monitoring user interface. +.. _synth-configure-app: + +Get your site ready to run synthetic tests +============================================ + +.. meta:: + :description: Information about the settings you need to configure for your application or site in order to receive traffic from Splunk Synthetic Monitoring. + +There are a couple of settings you might need to configure for your application or webpage to receive traffic from Splunk Synthetic Monitoring. + + +Allow Splunk Synthetic Monitoring IP addresses +------------------------------------------------- + +Splunk Synthetic Monitoring runs synthetic tests from a set of dedicated IP addresses. To ensure your internal network or web application firewall (WAF) does not block this traffic, place these IP addresses on your browser or site's allow list. + +See :ref:`public-locations` for the list of Splunk Synthetic Monitoring IP addresses, and then refer to your internal network's documentation for instructions on how to add them to your allow list. + +Exclude Splunk Synthetic Monitoring from analytics +---------------------------------------------------- +If you use a web analytics tool to monitor traffic on your website or application, you might want to exclude Splunk Synthetic Monitoring IP addresses from being counted as traffic. + +To do so, filter Splunk Synthetic Monitoring IP addresses in the settings of your web analytics tool. See :ref:`public-locations` for the list of IP addresses, and then refer to your analytics tool's documentation for instructions on how to filter them. + Choose a test ============================================================ @@ -116,12 +140,6 @@ For more examples on Java instrumentation, see :ref:`server-trace-information-ja Integrate with Splunk RUM so that you can automatically measure Web Vital metrics against your run results. Web vitals capture key metrics that affect user experience and assess the overall performance of your site. For more, see :ref:`rum-synth`.
-(Optional) Configure your application ------------------------------------------------------------------------- - - -If you use Splunk Synthetic Monitoring to monitor an application or website with allow/block lists or a web analytics tool, you might want to adjust the settings to accommodate traffic from Splunk Synthetic Monitoring. See :ref:`synth-configure-app` for detailed instructions. - Continue learning ============================== diff --git a/synthetics/syn-troubleshoot/syn-missing-alerts.rst b/synthetics/syn-troubleshoot/syn-missing-alerts.rst new file mode 100644 index 000000000..0dbf7eee3 --- /dev/null +++ b/synthetics/syn-troubleshoot/syn-missing-alerts.rst @@ -0,0 +1,10 @@ +.. _syn-missing-alerts: + +********************************************************* +Troubleshoot missing alerts +********************************************************* + +.. meta:: + :description: Troubleshoot missing alerts in synthetic tests + +Troubleshoot missing alerts in your synthetic tests. diff --git a/synthetics/syn-troubleshoot/syn-troubleshoot.rst b/synthetics/syn-troubleshoot/syn-troubleshoot.rst new file mode 100644 index 000000000..e78aef97e --- /dev/null +++ b/synthetics/syn-troubleshoot/syn-troubleshoot.rst @@ -0,0 +1,38 @@ +.. _syn-troubleshoot: + +**************************************** +Troubleshoot broken tests +**************************************** + +.. meta:: + :description: Troubleshoot broken tests + + + +There are a number of reasons why your tests might fail, such as issues with test validation or application unresponsiveness. For example: + +* The API endpoint was unreachable +* The URL was unreachable +* A UI element wasn't found +* The default wait time of 10 seconds is too short for step assertions to complete. A test might fail because it takes longer than 10 seconds for a website to load. + +Troubleshoot test validation +=============================== + +Follow these guidelines to troubleshoot a broken test. + +#. (Optional) Make a copy of the test so that you can check various solutions before fixing the original test. +#. Open the test page and see when the test started to fail. Consider the following questions: + + * When did the check fail? Is there a pattern among other failed runs? + * Does the check fail consistently on the same step, or intermittently? + * Is this the first time the check has failed on this step? Did you make a recent change to the test? + * Was the failure tied to a specific location or across all locations? + +#. Open the run results view of a failed test, find the step that is failing, and go to the link. +#. Open the browser's inspect element tool. +#. Duplicate the step and repeat the steps in your test until you find the broken step. +#. Verify that there is only one instance of the selector you want to use in your test. If the selector appears more than once, your test might break again in the future. Unique selectors provide optimal test performance. +#. Update your tests with your findings. + + diff --git a/synthetics/test-config/synth-configure-app.rst b/synthetics/test-config/synth-configure-app.rst deleted file mode 100644 index 861645624..000000000 --- a/synthetics/test-config/synth-configure-app.rst +++ /dev/null @@ -1,24 +0,0 @@ -.. _synth-configure-app: - -******************************************************************************* -Configure your site to accommodate synthetic tests -******************************************************************************* - -.. 
meta:: - :description: Information about the settings you need to configure for your application or site in order to receive traffic from Splunk Synthetic Monitoring. - -There are a couple of configurations you might need to set up for your application or webpage to receive traffic from Splunk Synthetic Monitoring. - -Allow Splunk Synthetic Monitoring IP addresses -================================================ - -Splunk Synthetic Monitoring runs synthetic tests from a set of dedicated IP addresses. To ensure your internal network or web application firewall (WAF) does not block this traffic, place these IP addresses on your browser or site's allow list. - -See :ref:`public-locations` for the list of Splunk Synthetic Monitoring IP addresses, and then refer to your internal network's documentation for instructions on how to add them to your allow list. - -Exclude Splunk Synthetic Monitoring from analytics -=================================================== -If you use a web analytics tool to monitor traffic on your website or application, you might want to exclude Splunk Synthetic Monitoring IP addresses from being counted as traffic. - -To do so, filter Splunk Synthetic Monitoring IP addresses in the settings of your web analytics tool. See :ref:`public-locations` for the list of IP addresses, and then refer to your analytics tool's documentation for instructions on how to filter them. - diff --git a/synthetics/test-config/test-config.rst b/synthetics/test-config/test-config.rst index 8b0958705..e4a8e0089 100644 --- a/synthetics/test-config/test-config.rst +++ b/synthetics/test-config/test-config.rst @@ -1,7 +1,7 @@ .. _test-config: *************************************************** -Manage synthetic tests +Advanced test configurations *************************************************** .. meta:: @@ -9,7 +9,6 @@ Manage synthetic tests .. toctree:: - synth-configure-app synth-alerts built-in-variables global-variables @@ -110,24 +109,13 @@ Choosing informative names for your tests and alerts helps organize content. Her :alt: This image shows two Browser tests with the prefix [ButtercupGames]. -======================================================================================== -Troubleshoot broken tests -======================================================================================== -Follow these guidelines to troubleshoot a broken test. +================================ +Troubleshoot broken tests +================================ -#. (Optional) Make a copy of the test so that you can check various solutions before fixing the original test. -#. Open the test page and see when the test started to fail. Consider the following questions: +See :ref:`syn-troubleshoot`. - * When did the check fail? Is there a pattern among other failed runs? - * Does the check fail consistently on the same step, or intermittently? - * Is this the first time the check has failed on this step? Did you make a recent change to the test? - * Was the failure tied to a specific location or across all locations? -#. Open the run results view of a failed test, find the step that is failing and go to the link. -#. Open inspect element. -#. Duplicate the step and repeat the steps in your test until you find the broken step. -#. Verify that there is one instance only of the selector you want to use in your test. If the selector appears more than once your test might break again in the future. Unique selectors provide optimal test performance. -#. Update your tests with your findings.
======================================================================================== Filter tests diff --git a/synthetics/uptime-test/uptime-test.rst b/synthetics/uptime-test/uptime-test.rst index 2872863e9..afa21464b 100644 --- a/synthetics/uptime-test/uptime-test.rst +++ b/synthetics/uptime-test/uptime-test.rst @@ -2,7 +2,7 @@ .. _uptime-test: ************************************************** -Use an Uptime Test to test port or HTTP uptime +Uptime Tests for ports and HTTP ************************************************** .. meta::