
Feature/prometheus #415

Closed · wants to merge 78 commits

Conversation

@V3ckt0r commented Dec 13, 2017

Hey guys,

Could I get an eye over this, along with any thoughts/comments? Please test this out via:

K6 metric names exposed to Prometheus:

# HELP k6_data_received_total Data received in bytes
# TYPE k6_data_received_total gauge
k6_data_received_total
# HELP k6_data_sent_total Data sent in bytes
# TYPE k6_data_sent_total gauge
k6_data_sent_total
# HELP k6_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which k6_exporter was built.
# TYPE k6_exporter_build_info gauge
k6_exporter_build_info{branch="",goversion="go1.9.2",revision="",version=""}
# HELP k6_http_req_blocked Time (ms) spent blocked (waiting for a free TCP connection slot) before initiating request.
# TYPE k6_http_req_blocked gauge
k6_http_req_blocked{type="avg"}
k6_http_req_blocked{type="max"}
k6_http_req_blocked{type="min"} 
k6_http_req_blocked{type="p(90)"} 
k6_http_req_blocked{type="p(95)"} 
# HELP k6_http_req_connecting_total Time (ms) spent establishing TCP connection to remote host
# TYPE k6_http_req_connecting_total gauge
k6_http_req_connecting_total{type="avg"}
k6_http_req_connecting_total{type="max"}
k6_http_req_connecting_total{type="min"} 
k6_http_req_connecting_total{type="p(90)"} 
k6_http_req_connecting_total{type="p(95)"}
# HELP k6_http_req_duration_total Total time (ms) for request, excluding time spent blocked, DNS lookup and TCP connect time
# TYPE k6_http_req_duration_total gauge
k6_http_req_duration_total{type="avg"} 
k6_http_req_duration_total{type="max"}
k6_http_req_duration_total{type="min"}
k6_http_req_duration_total{type="p(90)"} 
k6_http_req_duration_total{type="p(95)"}
# HELP k6_http_req_receiving_total Time (ms) spent receiving data from remote host
# TYPE k6_http_req_receiving_total gauge
k6_http_req_receiving_total{type="avg"}
k6_http_req_receiving_total{type="max"} 
k6_http_req_receiving_total{type="min"} 
k6_http_req_receiving_total{type="p(90)"} 
k6_http_req_receiving_total{type="p(95)"} 
# HELP k6_http_req_sending_total Time (ms) spent sending data to remote host
# TYPE k6_http_req_sending_total gauge
k6_http_req_sending_total{type="avg"} 
k6_http_req_sending_total{type="max"} 
k6_http_req_sending_total{type="min"} 
k6_http_req_sending_total{type="p(90)"} 
k6_http_req_sending_total{type="p(95)"} 
# HELP k6_http_req_waiting_total Time (ms) spent waiting for response from remote host (a.k.a. 'time to first byte', or 'TTFB')
# TYPE k6_http_req_waiting_total gauge
k6_http_req_waiting_total{type="avg"} 
k6_http_req_waiting_total{type="max"} 
k6_http_req_waiting_total{type="min"} 
k6_http_req_waiting_total{type="p(90)"} 
k6_http_req_waiting_total{type="p(95)"}
# HELP k6_http_reqs_total How many HTTP requests has k6 generated, in total
# TYPE k6_http_reqs_total gauge
k6_http_reqs_total
# HELP k6_iterations_total The aggregate number of times the VUs in the test have executed the JS script
# TYPE k6_iterations_total gauge
k6_iterations_total
# HELP k6_up Could k6 be reached
# TYPE k6_up gauge
k6_up
# HELP k6_vus_length Current number of active virtual users
# TYPE k6_vus_length gauge
k6_vus_length

#343

)

func HandlePrometheusMetrics() http.Handler {

Contributor review comment:
No need for this newline!

exporter := NewExporter(scrapeURI)
prometheus.MustRegister(exporter)
prometheus.MustRegister(version.NewCollector("k6_exporter"))

Contributor review comment:
Could honestly drop this one too.
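For readers skimming the fragments above, here is a plausible completion of the handler, assuming the exporter is served through the standard promhttp handler (NewExporter and scrapeURI come from this PR; returning promhttp.Handler() is my assumption, not the confirmed diff):

```go
package prometheus

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"github.com/prometheus/common/version"
)

// HandlePrometheusMetrics registers the k6 exporter and a build-info
// collector, then hands scraping off to the stock Prometheus handler.
// NewExporter and scrapeURI are defined elsewhere in this branch.
func HandlePrometheusMetrics() http.Handler {
	exporter := NewExporter(scrapeURI)
	prometheus.MustRegister(exporter)
	prometheus.MustRegister(version.NewCollector("k6_exporter"))
	return promhttp.Handler()
}
```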

api/server.go (outdated)
)

func NewHandler() http.Handler {
mux := http.NewServeMux()
mux.Handle("/v1/", v1.NewHandler())
mux.Handle("/ping", HandlePing())
mux.Handle("/", HandlePing())
mux.Handle("/metrics", prometheus.HandlePrometheusMetrics())
Contributor review comment:
Really minor nitpick: this would look better above "/".
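Concretely, the suggested ordering would read something like this (purely cosmetic, since net/http's ServeMux routes by longest pattern match rather than registration order):

```go
mux := http.NewServeMux()
mux.Handle("/v1/", v1.NewHandler())
mux.Handle("/ping", HandlePing())
mux.Handle("/metrics", prometheus.HandlePrometheusMetrics())
mux.Handle("/", HandlePing())
```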

@liclac (Contributor) commented Dec 15, 2017

Here's a thought: instead of having Prometheus be a regular collector, since we're always listening on a port for the API anyhow, we could have it be a part of the API server. The API server already has a connection to the Engine, which has a built-in view of the current state of all metrics; if you can lazily request all metrics from there (no overhead if it's never called), we could have it always available that way.

(I believe we can even export stuff like go runtime stats via prometheus? Because that would come in real handy for profiling. Or we could just make them metrics and hide them behind a flag, I suppose.)
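A rough sketch of that idea, assuming a client_golang version that allows unchecked collectors: a lazy prometheus.Collector whose Collect method only queries the Engine's current metric values when /metrics is actually scraped. The metricSource interface and the metric naming below are hypothetical placeholders, not the real k6 Engine API.

```go
package prometheus

import "github.com/prometheus/client_golang/prometheus"

// metricSource is a hypothetical stand-in for whatever view of current
// metric values the API server already gets from the Engine; it is not
// the real k6 Engine API.
type metricSource interface {
	Snapshot() map[string]float64 // metric name -> current value
}

// engineCollector exposes Engine metrics lazily: Prometheus only calls
// Collect when /metrics is scraped, so an idle endpoint costs nothing.
type engineCollector struct {
	src metricSource
}

func NewEngineCollector(src metricSource) *engineCollector {
	return &engineCollector{src: src}
}

// Describe sends no descriptors; with newer client_golang versions this
// marks the collector as "unchecked", since the metric set is only
// known at scrape time.
func (c *engineCollector) Describe(chan<- *prometheus.Desc) {}

// Collect turns the Engine snapshot into const gauge metrics.
func (c *engineCollector) Collect(ch chan<- prometheus.Metric) {
	for name, value := range c.src.Snapshot() {
		desc := prometheus.NewDesc("k6_"+name, "k6 metric "+name, nil, nil)
		ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, value)
	}
}
```

The API server could then register this once at startup (prometheus.MustRegister(NewEngineCollector(engine))) and serve it from the existing mux next to the mux.Handle("/metrics", ...) line.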

@V3ckt0r (Author) commented Dec 16, 2017

Hey @liclac, thanks for taking a look.

Yeah, we could do that. I did debate this; ultimately I opted for using the API server because any transformations already applied to the individual metrics (such as units, ms or μs) would be handled there, which means the Prometheus code would not need to be changed/refactored if those transformations were to change going forward. I could be wrong about this, but that was the logic behind it.

Regarding the Go runtime stats, this code already exposes them; see the metrics below:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"}
go_gc_duration_seconds{quantile="0.25"}
go_gc_duration_seconds{quantile="0.5"}
go_gc_duration_seconds{quantile="0.75"} 
go_gc_duration_seconds{quantile="1"} 
go_gc_duration_seconds_sum
go_gc_duration_seconds_count
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.9.2"} 
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes

These metrics get exposed by default via the prom client, hence you won't see them explicitly created in prometheus.go.
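For reference, this comes from client_golang's default registry, which ships with a Go collector (go_*) and a process collector (process_*) pre-registered. With a custom registry they would have to be added by hand, roughly like this sketch (the port and paths here are illustrative, not from this PR):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A custom registry starts empty, so the go_* runtime metrics must
	// be registered explicitly; the default registry does this for you,
	// which is why prometheus.go never mentions them.
	reg := prometheus.NewRegistry()
	reg.MustRegister(prometheus.NewGoCollector())

	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```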

On another note, do you know why CI is failing on this? 😕

ppcano and others added 26 commits (December 19, 2017, 13:36), including:

[Fix] Corrects calculation of t.Med in stats output: when t.Count is even, t.Med should be half the sum of the two middle values in t.Values (see the sketch below).
…et the metrics data as this feels more inline with the Prometheus style
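That median fix boils down to the usual even/odd split. A minimal sketch of the corrected logic, with the helper name and the assumption of pre-sorted samples being mine rather than the actual k6 stats code:

```go
// med returns the median of an already-sorted sample slice: the middle
// value for an odd count, and half the sum of the two middle values
// for an even count (the behaviour the fix above restores).
func med(values []float64) float64 {
	n := len(values)
	if n == 0 {
		return 0
	}
	if n%2 == 1 {
		return values[n/2]
	}
	return (values[n/2-1] + values[n/2]) / 2
}
```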
@V3ckt0r mentioned this pull request Jan 19, 2018
@V3ckt0r (Author) commented Jan 19, 2018

Hey @liclac, I'm going to create a separate PR that captures the collection of metrics from the core, as opposed to the API. Please see #478.

@mstoykov (Contributor) commented:
Closing this in favour of #478

@mstoykov closed this Jan 16, 2019