[7.17] [Maps] Update [email protected] #188121

Merged (1 commit) on Jul 12, 2024

Update [email protected]

ececa4b
checks-reporter / X-Pack Chrome Functional tests / Group 18 succeeded Jul 11, 2024 in 34m 43s

node scripts/functional_tests --bail --kibana-install-dir /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana-build-xpack --include-tag ciGroup18
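
This is the standard Kibana functional test runner (FTR) invocation: --bail stops on the first failure, --kibana-install-dir points the runner at a prebuilt Kibana distribution instead of building from source, and --include-tag restricts the run to suites tagged ciGroup18. A roughly equivalent local invocation (the install path below is illustrative) would be:

    # Run only the ciGroup18 functional test suites against a prebuilt Kibana,
    # stopping on the first failure. Adjust the install dir to your local build.
    node scripts/functional_tests \
      --bail \
      --kibana-install-dir /path/to/kibana-build-xpack \
      --include-tag ciGroup18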

[truncated]
]]).
   │ proc [kibana]   log   [16:45:18.223] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.logs.alerts-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
   │ proc [kibana]   log   [16:45:18.314] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
   │ proc [kibana]   log   [16:45:18.407] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_security_session_1] creating index, cause [api], templates [.kibana_security_session_index_template_1], shards [1]/[0]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]]).
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_task_manager_7.17.23_001/YYuOOi73R7q0MGmaz-Wvjg] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/MZooI-fERfS9OIPa29yw4g] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/MZooI-fERfS9OIPa29yw4g] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/MZooI-fERfS9OIPa29yw4g] update_mapping [_doc]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-event-log-policy]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.23-snapshot-template] for index patterns [.kibana-event-log-7.17.23-snapshot-*]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.23-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.23-snapshot-template], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.23-snapshot-000001]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.23-snapshot-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
   │ proc [kibana]   log   [16:45:19.802] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
   │ proc [kibana]   log   [16:45:19.842] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
   │ proc [kibana]   log   [16:45:19.990] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
   │ proc [kibana]   log   [16:45:20.894] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/MZooI-fERfS9OIPa29yw4g] update_mapping [_doc]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.07.11-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
   │ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.07.11-000001], backing indices [], and aliases []
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.11-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.07.11-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.11-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.11-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
   │ proc [kibana]   log   [16:45:25.129] [info][status] Kibana is now available (was degraded)
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup18' ]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup18' ]
   │ info Starting tests
   │ warn debug logs are being captured, only error logs will be written to the console
   │
     └-: security APIs - SAML
       └-> "before all" hook: beforeTestSuite.trigger in "security APIs - SAML"
       └-: SAML authentication
         └-> "before all" hook: beforeTestSuite.trigger for "should reject API requests if client is not authenticated"
         └-> should reject API requests if client is not authenticated
           └-> "before each" hook: global before each for "should reject API requests if client is not authenticated"
           └- ✓ pass  (37ms)
         └-> does not prevent basic login
           └-> "before each" hook: global before each for "does not prevent basic login"
           └- ✓ pass  (854ms)
         └-: initiating handshake
           └-> "before all" hook: beforeTestSuite.trigger for "should redirect user to a page that would capture URL fragment"
           └-> should redirect user to a page that would capture URL fragment
             └-> "before each" hook: global before each for "should redirect user to a page that would capture URL fragment"
             └- ✓ pass  (10ms)
           └-> should properly set cookie and redirect user to IdP
             └-> "before each" hook: global before each for "should properly set cookie and redirect user to IdP"
             └- ✓ pass  (968ms)
           └-> should not allow access to the API with the handshake cookie
             └-> "before each" hook: global before each for "should not allow access to the API with the handshake cookie"
             └- ✓ pass  (1.0s)
           └-> AJAX requests should not initiate handshake
             └-> "before each" hook: global before each for "AJAX requests should not initiate handshake"
             └- ✓ pass  (21ms)
           └-> "after all" hook: afterTestSuite.trigger for "AJAX requests should not initiate handshake"
         └-: finishing handshake
           └-> "before all" hook: beforeTestSuite.trigger for "should fail if SAML response is not complemented with handshake cookie"
           └-> should fail if SAML response is not complemented with handshake cookie
             └-> "before each" hook: global before each for "should fail if SAML response is not complemented with handshake cookie"
             └-> "before each" hook for "should fail if SAML response is not complemented with handshake cookie"
             └- ✓ pass  (141ms)
           └-> should succeed if both SAML response and handshake cookie are provided
             └-> "before each" hook: global before each for "should succeed if both SAML response and handshake cookie are provided"
             └-> "before each" hook for "should succeed if both SAML response and handshake cookie are provided"
             └- ✓ pass  (3.1s)
           └-> should succeed in case of IdP initiated login
             └-> "before each" hook: global before each for "should succeed in case of IdP initiated login"
             └-> "before each" hook for "should succeed in case of IdP initiated login"
             └- ✓ pass  (1.0s)
           └-> should fail if SAML response is not valid
             └-> "before each" hook: global before each for "should fail if SAML response is not valid"
             └-> "before each" hook for "should fail if SAML response is not valid"
             └- ✓ pass  (1.0s)
           └-> "after all" hook: afterTestSuite.trigger for "should fail if SAML response is not valid"
         └-: API access with active session
           └-> "before all" hook: beforeTestSuite.trigger for "should extend cookie on every successful non-system API call"
           └-> should extend cookie on every successful non-system API call
             └-> "before each" hook: global before each for "should extend cookie on every successful non-system API call"
             └-> "before each" hook for "should extend cookie on every successful non-system API call"
             └- ✓ pass  (52ms)
           └-> should not extend cookie for system API calls
             └-> "before each" hook: global before each for "should not extend cookie for system API calls"
             └-> "before each" hook for "should not extend cookie for system API calls"
             └- ✓ pass  (33ms)
           └-> should fail and preserve session cookie if unsupported authentication schema is used
             └-> "before each" hook: global before each for "should fail and preserve session cookie if unsupported authentication schema is used"
             └-> "before each" hook for "should fail and preserve session cookie if unsupported authentication schema is used"
             └- ✓ pass  (45ms)
           └-> "after all" hook: afterTestSuite.trigger for "should fail and preserve session cookie if unsupported authentication schema is used"
         └-: logging out
           └-> "before all" hook: beforeTestSuite.trigger for "should redirect to IdP with SAML request to complete logout"
           └-> should redirect to IdP with SAML request to complete logout
             └-> "before each" hook: global before each for "should redirect to IdP with SAML request to complete logout"
             └-> "before each" hook for "should redirect to IdP with SAML request to complete logout"
             └- ✓ pass  (2.2s)
           └-> should redirect to `logged_out` page if session cookie is not provided
             └-> "before each" hook: global before each for "should redirect to `logged_out` page if session cookie is not provided"
             └-> "before each" hook for "should redirect to `logged_out` page if session cookie is not provided"
             └- ✓ pass  (9ms)
           └-> should reject AJAX requests
             └-> "before each" hook: global before each for "should reject AJAX requests"
             └-> "before each" hook for "should reject AJAX requests"
             └- ✓ pass  (21ms)
           └-> should invalidate access token on IdP initiated logout
             └-> "before each" hook: global before each for "should invalidate access token on IdP initiated logout"
             └-> "before each" hook for "should invalidate access token on IdP initiated logout"
             └- ✓ pass  (2.1s)
           └-> should invalidate access token on IdP initiated logout even if there is no Kibana session
             └-> "before each" hook: global before each for "should invalidate access token on IdP initiated logout even if there is no Kibana session"
             └-> "before each" hook for "should invalidate access token on IdP initiated logout even if there is no Kibana session"
             └- ✓ pass  (1.2s)
           └-> "after all" hook: afterTestSuite.trigger for "should invalidate access token on IdP initiated logout even if there is no Kibana session"
         └-: API access with expired access token.
           └-> "before all" hook: beforeTestSuite.trigger for "expired access token should be automatically refreshed"
           └-> expired access token should be automatically refreshed
             └-> "before each" hook: global before each for "expired access token should be automatically refreshed"
             └-> "before each" hook for "expired access token should be automatically refreshed"
             └- ✓ pass  (1.1s)
           └-> should refresh access token even if multiple concurrent requests try to refresh it
             └-> "before each" hook: global before each for "should refresh access token even if multiple concurrent requests try to refresh it"
             └-> "before each" hook for "should refresh access token even if multiple concurrent requests try to refresh it"
             └- ✓ pass  (1.0s)
           └-> "after all" hook: afterTestSuite.trigger for "should refresh access token even if multiple concurrent requests try to refresh it"
         └-: API access with missing access token document.
           └-> "before all" hook: beforeTestSuite.trigger for "should redirect user to a page that would capture URL fragment"
           └-> should redirect user to a page that would capture URL fragment
             └-> "before each" hook: global before each for "should redirect user to a page that would capture URL fragment"
             └-> "before each" hook for "should redirect user to a page that would capture URL fragment"
             └- ✓ pass  (905ms)
           └-> should properly set cookie and redirect user to IdP
             └-> "before each" hook: global before each for "should properly set cookie and redirect user to IdP"
             └-> "before each" hook for "should properly set cookie and redirect user to IdP"
             └- ✓ pass  (997ms)
           └-> should start new SAML handshake even if multiple concurrent requests try to refresh access token
             └-> "before each" hook: global before each for "should start new SAML handshake even if multiple concurrent requests try to refresh access token"
             └-> "before each" hook for "should start new SAML handshake even if multiple concurrent requests try to refresh access token"
             └- ✓ pass  (995ms)
           └-> "after all" hook: afterTestSuite.trigger for "should start new SAML handshake even if multiple concurrent requests try to refresh access token"
         └-: IdP initiated login with active session
           └-> "before all" hook: beforeTestSuite.trigger for "should renew session and redirect to the home page if login is for the same user when access token is valid"
           └-> should renew session and redirect to the home page if login is for the same user when access token is valid
             └-> "before each" hook: global before each for "should renew session and redirect to the home page if login is for the same user when access token is valid"
             └-> "before each" hook for "should renew session and redirect to the home page if login is for the same user when access token is valid"
             └- ✓ pass  (4.1s)
           └-> should create a new session and redirect to the `overwritten_session` if login is for another user when access token is valid
             └-> "before each" hook: global before each for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token is valid"
             └-> "before each" hook for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token is valid"
             └- ✓ pass  (5.1s)
           └-> should renew session and redirect to the home page if login is for the same user when access token is expired
             └-> "before each" hook: global before each for "should renew session and redirect to the home page if login is for the same user when access token is expired"
             └-> "before each" hook for "should renew session and redirect to the home page if login is for the same user when access token is expired"
             └- ✓ pass  (24.1s)
           └-> should create a new session and redirect to the `overwritten_session` if login is for another user when access token is expired
             └-> "before each" hook: global before each for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token is expired"
             └-> "before each" hook for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token is expired"
             └- ✓ pass  (25.1s)
           └-> should renew session and redirect to the home page if login is for the same user when access token document is missing
             └-> "before each" hook: global before each for "should renew session and redirect to the home page if login is for the same user when access token document is missing"
             └-> "before each" hook for "should renew session and redirect to the home page if login is for the same user when access token document is missing"
             └- ✓ pass  (2.1s)
           └-> should create a new session and redirect to the `overwritten_session` if login is for another user when access token document is missing
             └-> "before each" hook: global before each for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token document is missing"
             └-> "before each" hook for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token document is missing"
             └- ✓ pass  (3.1s)
           └-> "after all" hook: afterTestSuite.trigger for "should create a new session and redirect to the `overwritten_session` if login is for another user when access token document is missing"
         └-> "after all" hook: afterTestSuite.trigger for "does not prevent basic login"
       └-> "after all" hook: afterTestSuite.trigger in "security APIs - SAML"
   │
   │29 passing (3.0m)
   │
   │ proc [kibana]   log   [16:48:32.251] [info][plugins-system][standard] Stopping all plugins.
   │ proc [kibana]   log   [16:48:32.253] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
   │ info [kibana] exited with null after 211.7 seconds
   │ info [es] stopping node ftr
   │ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
   │ info [o.e.n.Node] [ftr] stopping ...
   │ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
   │ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
   │ info [o.e.n.Node] [ftr] stopped
   │ info [o.e.n.Node] [ftr] closing ...
   │ info [o.e.n.Node] [ftr] closed
   │ info [es] stopped
   │ info [es] no debug files found, assuming es did not write any
   │ info [es] cleanup complete
--- [3/3] Running x-pack/test/security_api_integration/session_idle.config.ts
 info Installing from snapshot
   │ info version: 7.17.23
   │ info install path: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr
   │ info license: trial
   │ info Downloading snapshot manifest from https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240710-131256_a9e35d20/manifest.json
   │ info verifying cache of https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240710-131256_a9e35d20/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz
   │ info etags match, reusing cache from 2024-07-11T16:19:54.285Z
   │ info extracting /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/cache/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz
   │ info extracted to /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr
   │ info created /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/ES_TMPDIR
   │ info setting secure setting bootstrap.password to changeme
 info [es] starting node ftr on port 9220
 info Starting
   │ info moved /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/x-pack/test/security_api_integration/fixtures/saml/idp_metadata.xml in config to /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/config/idp_metadata.xml
   │ERROR Jul 11, 2024 4:48:43 PM sun.util.locale.provider.LocaleProviderAdapter <clinit>
   │      WARNING: COMPAT locale provider will be removed in a future release
   │      
   │ info [o.e.n.Node] [ftr] version[7.17.23-SNAPSHOT], pid[7516], build[default/tar/a9e35d20ce79ab145c56c9440320f9c9a38cd305/2024-07-10T13:07:56.388486095Z], OS[Linux/5.15.0-1062-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/22.0.1/22.0.1+8-16]
   │ info [o.e.n.Node] [ftr] JVM home [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/jdk], using bundled JDK [true]
   │ info [o.e.n.Node] [ftr] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/ES_TMPDIR, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:+UnlockDiagnosticVMOptions, -XX:G1NumCollectionsKeepPinned=10000000, -Xms1536m, -Xmx1536m, -XX:MaxDirectMemorySize=805306368, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr, -Des.path.conf=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
   │ info [o.e.n.Node] [ftr] version [7.17.23-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production
   │ info [o.e.p.PluginsService] [ftr] loaded module [aggs-matrix-stats]
   │ info [o.e.p.PluginsService] [ftr] loaded module [analysis-common]
   │ info [o.e.p.PluginsService] [ftr] loaded module [constant-keyword]
   │ info [o.e.p.PluginsService] [ftr] loaded module [frozen-indices]
   │ info [o.e.p.PluginsService] [ftr] loaded module [ingest-common]
   │ info [o.e.p.PluginsService] [ftr] loaded module [ingest-geoip]
   │ info [o.e.p.PluginsService] [ftr] loaded module [ingest-user-agent]
   │ info [o.e.p.PluginsService] [ftr] loaded module [kibana]
   │ info [o.e.p.PluginsService] [ftr] loaded module [lang-expression]
   │ info [o.e.p.PluginsService] [ftr] loaded module [lang-mustache]
   │ info [o.e.p.PluginsService] [ftr] loaded module [lang-painless]
   │ info [o.e.p.PluginsService] [ftr] loaded module [legacy-geo]
   │ info [o.e.p.PluginsService] [ftr] loaded module [mapper-extras]
   │ info [o.e.p.PluginsService] [ftr] loaded module [mapper-version]
   │ info [o.e.p.PluginsService] [ftr] loaded module [parent-join]
   │ info [o.e.p.PluginsService] [ftr] loaded module [percolator]
   │ info [o.e.p.PluginsService] [ftr] loaded module [rank-eval]
   │ info [o.e.p.PluginsService] [ftr] loaded module [reindex]
   │ info [o.e.p.PluginsService] [ftr] loaded module [repositories-metering-api]
   │ info [o.e.p.PluginsService] [ftr] loaded module [repository-encrypted]
   │ info [o.e.p.PluginsService] [ftr] loaded module [repository-url]
   │ info [o.e.p.PluginsService] [ftr] loaded module [runtime-fields-common]
   │ info [o.e.p.PluginsService] [ftr] loaded module [search-business-rules]
   │ info [o.e.p.PluginsService] [ftr] loaded module [searchable-snapshots]
   │ info [o.e.p.PluginsService] [ftr] loaded module [snapshot-repo-test-kit]
   │ info [o.e.p.PluginsService] [ftr] loaded module [spatial]
   │ info [o.e.p.PluginsService] [ftr] loaded module [test-delayed-aggs]
   │ info [o.e.p.PluginsService] [ftr] loaded module [test-die-with-dignity]
   │ info [o.e.p.PluginsService] [ftr] loaded module [test-error-query]
   │ info [o.e.p.PluginsService] [ftr] loaded module [transform]
   │ info [o.e.p.PluginsService] [ftr] loaded module [transport-netty4]
   │ info [o.e.p.PluginsService] [ftr] loaded module [unsigned-long]
   │ info [o.e.p.PluginsService] [ftr] loaded module [vector-tile]
   │ info [o.e.p.PluginsService] [ftr] loaded module [vectors]
   │ info [o.e.p.PluginsService] [ftr] loaded module [wildcard]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-aggregate-metric]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-analytics]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async-search]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-autoscaling]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ccr]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-core]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-data-streams]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-deprecation]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-enrich]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-eql]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-fleet]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-graph]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-identity-provider]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ilm]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-logstash]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ml]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-monitoring]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ql]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-rollup]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-security]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-shutdown]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-sql]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-stack]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-text-structure]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-voting-only-node]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-watcher]
   │ info [o.e.p.PluginsService] [ftr] no plugins loaded
   │ info [o.e.e.NodeEnvironment] [ftr] using [1] data paths, mounts [[/opt/local-ssd (/dev/nvme0n1)]], net usable_space [343.5gb], net total_space [368gb], types [ext4]
   │ info [o.e.e.NodeEnvironment] [ftr] heap size [1.5gb], compressed ordinary object pointers [true]
   │ info [o.e.n.Node] [ftr] node name [ftr], node ID [l2fgmCu-RduQOOzZrnhi8Q], cluster name [job-kibana-default-ciGroup18-cluster-ftr], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]
   │ info [o.e.x.m.p.l.CppLogMessageHandler] [ftr] [controller/7685] [Main.cc@122] controller (64 bit): Version 7.17.23-SNAPSHOT (Build 3e4489a02bea5d) Copyright (c) 2024 Elasticsearch BV
   │ info [o.o.c.c.InitializationService] [ftr] Initializing OpenSAML using the Java Services API
   │ info [o.o.x.a.AlgorithmRegistry] [ftr] Algorithm failed runtime support check, will not be usable: http://www.w3.org/2001/04/xmlenc#ripemd160
   │ info [o.o.x.a.AlgorithmRegistry] [ftr] Algorithm failed runtime support check, will not be usable: http://www.w3.org/2001/04/xmldsig-more#hmac-ripemd160
   │ info [o.o.x.a.AlgorithmRegistry] [ftr] Algorithm failed runtime support check, will not be usable: http://www.w3.org/2001/04/xmldsig-more#rsa-ripemd160
   │ info [o.o.s.m.r.i.AbstractReloadingMetadataResolver] [ftr] Metadata Resolver FilesystemMetadataResolver saml1: New metadata successfully loaded for '/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/config/idp_metadata.xml'
   │ info [o.o.s.m.r.i.AbstractReloadingMetadataResolver] [ftr] Metadata Resolver FilesystemMetadataResolver saml1: Next refresh cycle for metadata provider '/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/config/idp_metadata.xml' will occur on '2024-07-12T16:48:53.098Z' ('2024-07-12T16:48:53.098Z' local time)
   │ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,native/native1,saml/saml1]
   │ info [o.e.x.s.a.s.FileRolesStore] [ftr] parsed [0] roles from file [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/config/roles.yml]
   │ info [o.e.i.g.ConfigDatabases] [ftr] initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/config/ingest-geoip] for changes
   │ info [o.e.i.g.DatabaseNodeService] [ftr] initialized database registry, using geoip-databases directory [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup18-cluster-ftr/ES_TMPDIR/geoip-databases/l2fgmCu-RduQOOzZrnhi8Q]
   │ info [o.e.t.NettyAllocator] [ftr] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
   │ info [o.e.i.r.RecoverySettings] [ftr] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
   │ info [o.e.d.DiscoveryModule] [ftr] using discovery type [single-node] and seed hosts providers [settings]
   │ info [o.e.g.DanglingIndicesState] [ftr] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
   │ info [o.e.n.Node] [ftr] initialized
   │ info [o.e.n.Node] [ftr] starting ...
   │ info [o.e.x.s.c.f.PersistentCache] [ftr] persistent cache index loaded
   │ info [o.e.x.d.l.DeprecationIndexingComponent] [ftr] deprecation component started
   │ info [o.e.t.TransportService] [ftr] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-alerts-7] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-es] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-kibana] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-logstash] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-beats] with version [7]
   │ info [o.e.b.BootstrapChecks] [ftr] HTTPS is required in order to use the token service; please enable HTTPS using the [xpack.security.http.ssl.enabled] setting or disable the token service using the [xpack.security.authc.token.enabled] setting
   │ info [o.e.c.c.Coordinator] [ftr] setting initial configuration to VotingConfiguration{l2fgmCu-RduQOOzZrnhi8Q}
   │ info [o.e.c.s.MasterService] [ftr] elected-as-master ([1] nodes joined)[{ftr}{l2fgmCu-RduQOOzZrnhi8Q}{ASxIOm9ySdCKWcSaG37I6w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{ftr}{l2fgmCu-RduQOOzZrnhi8Q}{ASxIOm9ySdCKWcSaG37I6w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
   │ info [o.e.c.c.CoordinationState] [ftr] cluster UUID set to [pjimhmPuRVe8gjHFl0wKFA]
   │ info [o.e.c.s.ClusterApplierService] [ftr] master node changed {previous [], current [{ftr}{l2fgmCu-RduQOOzZrnhi8Q}{ASxIOm9ySdCKWcSaG37I6w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
   │ info [o.e.h.AbstractHttpServerTransport] [ftr] publish_address {127.0.0.1:9220}, bound_addresses {[::1]:9220}, {127.0.0.1:9220}
   │ info [o.e.n.Node] [ftr] started
   │ info [o.e.g.GatewayService] [ftr] recovered [0] indices into cluster_state
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-state] for index patterns [.ml-state*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-stats] for index patterns [.ml-stats-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [data-streams-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [ilm-history] for index patterns [ilm-history-5*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.slm-history] for index patterns [.slm-history-5*]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.security-7] creating index, cause [api], templates [], shards [1]/[0]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [logs] for index patterns [logs-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [metrics] for index patterns [metrics-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [synthetics] for index patterns [synthetics-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ml-size-based-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [metrics]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [logs]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [synthetics]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [7-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [30-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [180-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [90-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [365-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [watch-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [slm-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.deprecation-indexing-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
   │ info [o.e.l.LicenseService] [ftr] license [26ed7264-f47f-451a-abd6-ec6f70ffe693] mode [trial] - valid
   │ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,native/native1,saml/saml1]
   │ info [o.e.x.s.s.SecurityStatusChangeListener] [ftr] Active license is now [TRIAL]; Security is enabled
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [system_indices_superuser]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [system_indices_superuser]
   │ info starting [kibana] > /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana-build-xpack/bin/kibana --logging.json=false --server.port=5620 --elasticsearch.hosts=http://localhost:9220 --elasticsearch.username=kibana_system --elasticsearch.password=changeme --data.search.aggs.shardDelay.enabled=true --security.showInsecureClusterWarning=false --telemetry.banner=false --telemetry.sendUsageTo=staging --server.maxPayload=1679958 --plugin-path=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana/test/common/fixtures/plugins/newsfeed --newsfeed.service.urlRoot=http://localhost:5620 --newsfeed.service.pathTemplate=/api/_newsfeed-FTS-external-service-simulators/kibana/v{VERSION}.json --logging.appenders.deprecation.type=console --logging.appenders.deprecation.layout.type=json --logging.loggers[0].name=elasticsearch.deprecation --logging.loggers[0].level=all --logging.loggers[0].appenders[0]=deprecation --status.allowAnonymous=true --server.uuid=5b2de169-2785-441b-ae8c-186a1936b17d --xpack.maps.showMapsInspectorAdapter=true --xpack.maps.preserveDrawingBuffer=true --xpack.security.encryptionKey="wuGNaIhoMpk5sO4UBxgr3NyW1sFcLgIf" --xpack.encryptedSavedObjects.encryptionKey="DkdXazszSCYexXqz4YktBGHCRkV6hyNK" --xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled=true --savedObjects.maxImportPayloadBytes=10485760 --xpack.siem.enabled=true --map.proxyElasticMapsServiceInMaps=true --telemetry.optIn=true --xpack.fleet.enabled=true --xpack.fleet.agents.pollingRequestTimeout=5000 --xpack.data_enhanced.search.sessions.enabled=true --xpack.data_enhanced.search.sessions.notTouchedTimeout=15s --xpack.data_enhanced.search.sessions.trackingInterval=5s --xpack.data_enhanced.search.sessions.cleanupInterval=5s --xpack.ruleRegistry.write.enabled=true --xpack.security.session.idleTimeout=10s --xpack.security.session.cleanupInterval=20s --xpack.security.authc.providers={"basic":{"basic1":{"order":0}},"saml":{"saml_fallback":{"order":1,"realm":"saml1"},"saml_override":{"order":2,"realm":"saml1","session":{"idleTimeout":"2m"}},"saml_disable":{"order":3,"realm":"saml1","session":{"idleTimeout":0}}}}
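   (For readability, here is the xpack.security.authc.providers value from the startup flags above, pretty-printed; same content, shell quoting is illustrative. It registers one basic provider plus three SAML providers backed by the same saml1 realm, differing only in session idle timeout, which is what the session-idle tests below exercise.)

    --xpack.security.authc.providers='{
      "basic": { "basic1": { "order": 0 } },
      "saml": {
        "saml_fallback": { "order": 1, "realm": "saml1" },
        "saml_override": { "order": 2, "realm": "saml1", "session": { "idleTimeout": "2m" } },
        "saml_disable":  { "order": 3, "realm": "saml1", "session": { "idleTimeout": 0 } }
      }
    }'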
   │ proc [kibana] Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/7.17/production.html#openssl-legacy-provider
   │ proc [kibana]   log   [16:49:12.544] [info][plugins-service] Plugin "metricsEntities" is disabled.
   │ proc [kibana]   log   [16:49:12.627] [info][server][Preboot][http] http server running at http://localhost:5620
   │ proc [kibana]   log   [16:49:12.676] [warning][config][deprecation] Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format.
   │ proc [kibana]   log   [16:49:12.677] [warning][config][deprecation] Configuring "xpack.fleet.enabled" is deprecated and will be removed in 8.0.0.
   │ proc [kibana]   log   [16:49:12.677] [warning][config][deprecation] You no longer need to configure "xpack.fleet.agents.pollingRequestTimeout".
   │ proc [kibana]   log   [16:49:12.678] [warning][config][deprecation] map.proxyElasticMapsServiceInMaps is deprecated and is no longer used
   │ proc [kibana]   log   [16:49:12.678] [warning][config][deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
   │ proc [kibana]   log   [16:49:12.679] [warning][config][deprecation] Setting "security.showInsecureClusterWarning" has been replaced by "xpack.security.showInsecureClusterWarning"
   │ proc [kibana]   log   [16:49:12.679] [warning][config][deprecation] Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout.
   │ proc [kibana]   log   [16:49:12.680] [warning][config][deprecation] Setting "xpack.siem.enabled" has been replaced by "xpack.securitySolution.enabled"
   │ proc [kibana]   log   [16:49:12.815] [info][plugins-system][standard] Setting up [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
   │ proc [kibana]   log   [16:49:12.836] [info][plugins][taskManager] TaskManager is identified by the Kibana UUID: 5b2de169-2785-441b-ae8c-186a1936b17d
   │ proc [kibana]   log   [16:49:12.951] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
   │ proc [kibana]   log   [16:49:12.970] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
   │ proc [kibana]   log   [16:49:12.989] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
   │ proc [kibana]   log   [16:49:13.005] [info][encryptedSavedObjects][plugins] Hashed 'xpack.encryptedSavedObjects.encryptionKey' for this instance: nnkvE7kjGgidcjXzmLYBbIh4THhRWI1/7fUjAEaJWug=
   │ proc [kibana]   log   [16:49:13.041] [info][plugins][ruleRegistry] Installing common resources shared between all indices
   │ proc [kibana]   log   [16:49:13.488] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
   │ proc [kibana]   log   [16:49:13.809] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
   │ proc [kibana]   log   [16:49:13.810] [info][savedobjects-service] Starting saved objects migrations
   │ proc [kibana]   log   [16:49:13.860] [info][savedobjects-service] [.kibana] INIT -> CREATE_NEW_TARGET. took: 15ms.
   │ proc [kibana]   log   [16:49:13.864] [info][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 16ms.
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_task_manager_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_task_manager_7.17.23_001]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_7.17.23_001]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_7.17.23_001][0]]]).
   │ proc [kibana]   log   [16:49:14.092] [info][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 228ms.
   │ proc [kibana]   log   [16:49:14.133] [info][savedobjects-service] [.kibana] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 273ms.
   │ proc [kibana]   log   [16:49:14.230] [info][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 138ms.
   │ proc [kibana]   log   [16:49:14.230] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 382ms
   │ proc [kibana]   log   [16:49:14.263] [info][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 130ms.
   │ proc [kibana]   log   [16:49:14.263] [info][savedobjects-service] [.kibana] Migration completed after 418ms
   │ proc [kibana]   log   [16:49:14.270] [info][plugins-system][standard] Starting [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
   │ proc [kibana]   log   [16:49:15.471] [info][monitoring][monitoring][plugins] config sourced from: production cluster
   │ proc [kibana]   log   [16:49:16.701] [info][server][Kibana][http] http server running at http://localhost:5620
   │ proc [kibana]   log   [16:49:16.771] [info][status] Kibana is now degraded
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-ecs-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-technical-mappings]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-agent-configuration] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-agent-configuration]
   │ proc [kibana]   log   [16:49:16.997] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-custom-link] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-custom-link]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana_security_session_index_template_1] for index patterns [.kibana_security_session_1]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-agent-configuration][0], [.apm-custom-link][0]]]).
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_security_session_1] creating index, cause [api], templates [.kibana_security_session_index_template_1], shards [1]/[0]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]]).
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_task_manager_7.17.23_001/sBKE2KE5TA-GA_rQa0-LRQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/5JqRjbXTTN6e7-vTNG-Qnw] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/5JqRjbXTTN6e7-vTNG-Qnw] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/5JqRjbXTTN6e7-vTNG-Qnw] update_mapping [_doc]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.alerts-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-event-log-policy]
   │ proc [kibana]   log   [16:49:18.155] [info][plugins][ruleRegistry] Installed common resources shared between all indices
   │ proc [kibana]   log   [16:49:18.157] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
   │ proc [kibana]   log   [16:49:18.157] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
   │ proc [kibana]   log   [16:49:18.158] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
   │ proc [kibana]   log   [16:49:18.158] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
   │ proc [kibana]   log   [16:49:18.257] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.logs.alerts-mappings]
   │ proc [kibana]   log   [16:49:18.303] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.uptime.alerts-mappings]
   │ proc [kibana]   log   [16:49:18.360] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.apm.alerts-mappings]
   │ proc [kibana]   log   [16:49:18.403] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.23-snapshot-template] for index patterns [.kibana-event-log-7.17.23-snapshot-*]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.23-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.23-snapshot-template], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.23-snapshot-000001]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.23-snapshot-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
   │ proc [kibana]   log   [16:49:18.970] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720714350198800445/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
   │ proc [kibana]   log   [16:49:19.005] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
   │ proc [kibana]   log   [16:49:19.074] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
   │ proc [kibana]   log   [16:49:19.999] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/5JqRjbXTTN6e7-vTNG-Qnw] update_mapping [_doc]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.07.11-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
   │ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.07.11-000001], backing indices [], and aliases []
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.07.11-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.11-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.11-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.11-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
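Both indices above (.kibana-event-log-7.17.23-snapshot-000001 and .ds-ilm-history-5-2024.07.11-000001) step through the same hot-phase entry sequence: the [new] complete marker, then the implicit unfollow prerequisite check, then check-rollover-ready, where they wait until a rollover condition is met. A minimal sketch of a policy that drives exactly this flow, again via the 7.x client (the rollover thresholds are illustrative assumptions, not the real kibana-event-log-policy values):

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Sketch: a hot-phase rollover policy producing the transitions the log
// records (new -> hot/unfollow -> hot/check-rollover-ready). The
// thresholds below are illustrative assumptions.
async function putEventLogPolicy() {
  await client.ilm.putLifecycle({
    policy: 'kibana-event-log-policy', // 7.x client takes the name as "policy"
    body: {
      policy: {
        phases: {
          hot: {
            actions: {
              rollover: { max_size: '50gb', max_age: '30d' },
            },
          },
        },
      },
    },
  });
}

putEventLogPolicy().catch(console.error);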
   │ proc [kibana]   log   [16:49:24.227] [info][status] Kibana is now available (was degraded)
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup18' ]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [test_user]
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup18' ]
   │ info Starting tests
   │ warn debug logs are being captured, only error logs will be written to the console
   │
     └-: security APIs - Session Idle
       └-> "before all" hook: beforeTestSuite.trigger in "security APIs - Session Idle"
       └-: Session Idle cleanup
         └-> "before all" hook: beforeTestSuite.trigger for "should properly clean up session expired because of idle timeout"
         └-> should properly clean up session expired because of idle timeout
           └-> "before each" hook: global before each for "should properly clean up session expired because of idle timeout"
           └-> "before each" hook for "should properly clean up session expired because of idle timeout"
           └- ✓ pass  (1.0m)
         └-> should properly clean up session expired because of idle timeout when providers override global session config
           └-> "before each" hook: global before each for "should properly clean up session expired because of idle timeout when providers override global session config"
           └-> "before each" hook for "should properly clean up session expired because of idle timeout when providers override global session config"
           └- ✓ pass  (1.0m)
         └-> should not clean up session if user is active
           └-> "before each" hook: global before each for "should not clean up session if user is active"
           └-> "before each" hook for "should not clean up session if user is active"
           └- ✓ pass  (1.0m)
         └-> "after all" hook: afterTestSuite.trigger for "should not clean up session if user is active"
       └-: Session
         └-> "before all" hook: beforeTestSuite.trigger in "Session"
         └-: GET /internal/security/session
           └-> "before all" hook: beforeTestSuite.trigger for "should return current session information"
           └-> should return current session information
             └-> "before each" hook: global before each for "should return current session information"
             └-> "before each" hook for "should return current session information"
             └- ✓ pass  (29ms)
           └-> should not extend the session
             └-> "before each" hook: global before each for "should not extend the session"
             └-> "before each" hook for "should not extend the session"
             └- ✓ pass  (53ms)
           └-> "after all" hook: afterTestSuite.trigger for "should not extend the session"
         └-: POST /internal/security/session
           └-> "before all" hook: beforeTestSuite.trigger for "should redirect to GET"
           └-> should redirect to GET
             └-> "before each" hook: global before each for "should redirect to GET"
             └-> "before each" hook for "should redirect to GET"
             └- ✓ pass  (22ms)
           └-> should extend the session
             └-> "before each" hook: global before each for "should extend the session"
             └-> "before each" hook for "should extend the session"
             └- ✓ pass  (86ms)
           └-> "after all" hook: afterTestSuite.trigger for "should extend the session"
         └-> "after all" hook: afterTestSuite.trigger in "Session"
       └-> "after all" hook: afterTestSuite.trigger in "security APIs - Session Idle"
   │
   │ 7 passing (3.0m)
   │
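The seven passing tests above were selected purely by tag: a Kibana FTR suite opts into a CI group by calling this.tags(...) inside its top-level describe, which is what the "include the tag(s): [ 'ciGroup18' ]" filter lines earlier match against. A simplified sketch of the shape such a suite takes (FtrProviderContext, getService, and the supertest service follow Kibana's FTR conventions; the login/cookie setup a real session test needs is elided, and the asserted response field is an assumption):

import expect from '@kbn/expect';
import { FtrProviderContext } from '../../ftr_provider_context';

export default function ({ getService }: FtrProviderContext) {
  const supertest = getService('supertest');

  describe('security APIs - Session Idle', function () {
    // Opts this suite into the ciGroup18 tag filter.
    this.tags('ciGroup18');

    it('should return current session information', async () => {
      // A real test authenticates first and sends the session cookie;
      // that setup is elided from this sketch.
      const { body } = await supertest
        .get('/internal/security/session')
        .expect(200);
      expect(body).to.have.property('expiresInMs'); // assumed response field
    });
  });
}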
   │ proc [kibana]   log   [16:52:48.634] [info][plugins-system][standard] Stopping all plugins.
   │ proc [kibana]   log   [16:52:48.636] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
   │ info [kibana] exited with null after 228.5 seconds
   │ info [es] stopping node ftr
   │ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
   │ info [o.e.n.Node] [ftr] stopping ...
   │ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
   │ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
   │ info [o.e.n.Node] [ftr] stopped
   │ info [o.e.n.Node] [ftr] closing ...
   │ info [o.e.n.Node] [ftr] closed
   │ info [es] stopped
   │ info [es] no debug files found, assuming es did not write any
   │ info [es] cleanup complete