Add metrics registry #2071
Conversation
cd896c7 to 608677a (Compare)
The main thing I want to know from this PR is whether we're fine with how we now need to thread builtinMetrics through the code. I can revert the registry changes if we don't think we want anything like that, or if we want a fuller PR, but the builtinMetrics threading took a lot of time and fixes, and will need at least a few more test fixes, which will make it span more or less the whole codebase, so I'd prefer to do it early and as a single PR.
cmd/run.go
Outdated
registry := stats.NewRegistry(engine.Samples)
builtInMetrics := metrics.RegisterBuiltinMetrics(registry)
globalCtx = stats.WithRegistry(globalCtx, registry)
globalCtx = metrics.WithBuiltinMetrics(globalCtx, builtInMetrics)

lingerCtx, lingerCancel := context.WithCancel(globalCtx)
defer lingerCancel()
runCtx, runCancel := context.WithCancel(lingerCtx)
defer runCancel()
These hacks were needed because I need to have the builtinMetrics in setup, teardown, handleSummary and all the outputs, so it needs to happen as early as possible, but I also need the engine's Samples, as I didn't want to move that as well.
I am a bit leery of how many things we're sticking in these contexts... What is the problem with having the registry and builtInMetrics be passed normally to the execScheduler and from it to the VUs, and then being accessible through the State that already lives in the context?
for starters ... lib.State is not around in the init context
Fair enough, but you have this: https://github.com/k6io/k6/blob/7da10ac81128940e0828d9aca2e955f4e046c742/js/common/initenv.go#L31-L33
It's actually even better, since that way you can ensure that metrics can be registered only in the init env 🤷
I'd also prefer it if we can avoid using the context for this, though not sure if it would be too messy to thread the registry everywhere it's needed.
Also, nitpick, but I'd move the comment about runCtx and lingerCtx from line 147 to here, as it feels detached as it is.
cmd/run.go
Outdated
// that can accept them. And if some output fails, stop the already started
// ones. This may take some time, since some outputs make initial network
// requests to set up whatever remote services are going to listen to them.
func StartOutputs(
I literally needed to thread too much stuff through the engine otherwise, and decided that moving it out would be much easier.
This should probably be a method on some object having outputs and logger as fields ... but I didn't want to make additional changes (this will be a theme in these comments and the whole WIP PR :P )
js/runner.go
Outdated
// TODO move this to a function or something
builtinMetrics := metrics.GetBuiltInMetrics(ctx)
This was also easier than continuing to add to the Trail's "constructor".
As I mentioned in another comment, it should be relatively easy to thread the metric registry and the built-in metrics struct to the VU State, to which you already have direct access here.
stats/registry.go
Outdated
// PushContainer emits a container through the registry's sink
// TODO drop this somehow
func (r *Registry) PushContainer(ctx context.Context, container SampleContainer) {
	PushIfNotDone(ctx, r.sink, container)
}
A lot of the actual design problems come from the fact that we have containers, and even more from custom containers such as HTTPTrail.
I don't think just removing the latter is feasible at all without so much rework that it would be a totally different PR, even if we wanted to, so some workaround will need to be found.
The predominant problem is that while I can make certain that the only way a Sample can be emitted is through the registry, by not having any way to emit a metric through the registry other than through the metric it generates (see Sample's Push method, which probably should be an Emit on the Metric 🤔 ), this more or less doesn't work with SampleContainers, and especially with HTTPTrail, as that one is specifically useful in the cloud output, where instead of looking at its Samples we just look at its ... properly typed fields.
I do think we should postpone fixing that for a later PR, hopefully still in the v0.33.0 cycle. I have hopes that if we introduce separate Metric/Sample types for Gauge, Counter, Rate and Trend, some of these problems will go away 🤷
lib/executor/ramping_arrival_rate.go
Outdated
Value: 1, Metric: metrics.DroppedIterations,
Tags: metricTags, Time: time.Now(),
})
droppedIterationMetric.Emit(parentCtx, time.Now(), metricTags, 1)
This seems a lot better to me than the above, or the Sample.Push I show in the constant-arrival-rate: https://github.com/k6io/k6/pull/2071/files#diff-2b3a85b9dccbd5050f5755479947c5237340bf7335dba7f83d654c55548df710R349-R352
cc @na--
Yeah, I saw this, and it's slightly better than Sample.Push(), but I still think we're not separating concerns properly.
In general, I like the metrics registry and the BuiltInMetrics struct approach, but I am not a huge fan of how they are passed around via context (I don't think we need it, we can simply pass them to where they are required) and I don't like the self-propelled metric samples.
lib/metrics/metrics.go
Outdated
"go.k6.io/k6/stats"
)

// TODO: refactor this, using non thread-safe global variables seems like a bad idea for various reasons...
const (
	VUsName = "vus"
I don't like the fact that we have constants for the metric names - we shouldn't need to have that. The only place the metric name should need to be used is below, in RegisterBuiltinMetrics(). Anywhere else, if you need the name of a metric, you should be able to access it via builtInMetrics.SpecificMetric.Name.
This, for example, means that the cloud output will have to have access to the registry in order to make half the changes that are currently there, which would mean ... a lot more code changes that are only needed for this specific case, so I do prefer the constants to that.
This for example means that the cloud output will have to have access to the registry
Why? Wouldn't it just need access to the BuiltInMetrics, which it can easily get through https://github.com/k6io/k6/blob/7da10ac81128940e0828d9aca2e955f4e046c742/output/types.go#L100-L104?
I thought I'd dropped this...
Yes, I started down this route and it turned out to be ... not easy. Don't get me wrong, it worked up until the point I gave up, which was when I needed to change a bunch of functions to methods just to hand them the builtinMetrics a bit more easily, in order for them to get just the names of the builtinMetrics ... and even worse, not to match against them but to reemit them as separate metrics.
I do remember there were other places where it's very cumbersome, for ... IMO no good reason, to not have the names of the metrics as constants.
no good reason to not have the names of the metrics as constants
Not sure if it's a good reason, but my reason is that, when we have this metrics registry, there shouldn't really be a need to compare metric names. And the fact that there is such a need is a bit of a code smell, something we probably need to refactor further in the future, potentially in another PR.
When every metric has a single object, you should be able to just compare these object pointers to each other, or use them as map indexes, and that would be enough (and more efficient than using strings). IMO metric names are an implementation detail, the majority of the k6 code shouldn't know them or care about them. The cloud output is a bit different, for now, because it repackages the samples in http_req_li_all and http_iter_li_all, but it should still be easily implementable by passing it the BuiltInMetrics.
What are the other places where metric names are used? As far as I can see, they are only required in the cloud output:
(git)-[metricsRegistry] % rg 'metrics\.[\w]+Name'
output/cloud/output_test.go
232: Metric: metrics.VUsName,
650: Metric: metrics.HTTPReqDurationName,
output/cloud/output.go
380: metrics.DataSentName: float64(sc.BytesWritten),
381: metrics.DataReceivedName: float64(sc.BytesRead),
385: values[metrics.IterationDurationName] = stats.D(sc.EndTime.Sub(sc.StartTime))
386: values[metrics.IterationsName] = 1
output/cloud/data.go
119: values[metrics.HTTPReqsName] = 1
120: values[metrics.HTTPReqDurationName] = stats.D(trail.Duration)
121: values[metrics.HTTPReqBlockedName] = stats.D(trail.Blocked)
122: values[metrics.HTTPReqConnectingName] = stats.D(trail.Connecting)
123: values[metrics.HTTPReqTLSHandshakingName] = stats.D(trail.TLSHandshaking)
124: values[metrics.HTTPReqSendingName] = stats.D(trail.Sending)
125: values[metrics.HTTPReqWaitingName] = stats.D(trail.Waiting)
126: values[metrics.HTTPReqReceivingName] = stats.D(trail.Receiving)
128: values[metrics.HTTPReqFailedName] = stats.B(trail.Failed.Bool)
output/cloud/data_test.go
53: Metric: metrics.VUsName,
71: metrics.DataSentName: 1234.5,
72: metrics.DataReceivedName: 6789.1,
73: metrics.IterationDurationName: stats.D(10 * time.Second),
core/engine.go
Outdated
@@ -147,6 +148,10 @@ func (e *Engine) StartOutputs() error {
})
}

if builtinMetricOut, ok := out.(output.WithBuiltinMetrics); ok {
	builtinMetricOut.SetBuiltinMetrics(e.builtInMetrics)
This seems unnecessary - you already do it in StartOutputs() in cmd/run.go.
oops this whole function should be dropped ;)
stats/registry.go
Outdated
// TODO maybe rename the whole package to metrics so we have `metrics.Registry` ?
type Registry struct {
	metrics map[string]*Metric
	sink chan<- SampleContainer
I think the registry shouldn't have this sink. It's not a clear separation of concerns - the registry should be concerned with registering metrics, and potentially time series later on. The actual processing and pushing of metric samples around should be handled outside of it.
How do you prevent some code from just making a new registry, registering a metric in it, and then emitting it?
Hmm, I see your point, but at the same time, I am not sure what you describe is actually a problem we should guard against 😅 A simple metric registry, even when it doesn't have a sink, would be enough to guard against scripts that register conflicting custom metrics from JS, either between themselves or with built-in ones, which is what we want. We control the k6/metrics API and it will only ever use the single registry we initialize.
I'd say that guarding against malicious xk6 extensions sending garbage is out of scope for this PR. They can run arbitrary Go code and have 1001 other ways of messing everything up, including way more important things than metrics. Even in this PR, they can just send whatever they want directly to the output, or over https://github.com/k6io/k6/blob/7da10ac81128940e0828d9aca2e955f4e046c742/lib/state.go#L64
stats/stats.go
Outdated
@@ -334,6 +334,10 @@ type Sample struct {
Value float64
}

func (s Sample) Push(ctx context.Context) {
This also seems bad - too much mixing of concerns. I'd rather not have self-propelled metric samples 😅 They should be simple data points and something else should push them around.
Don't have much to add besides what @na-- mentioned:
- Avoid embedding in the context if possible.
- Sample.Push() and Registry.PushContainer() feel slightly out of place. The registry should handle metric registration only, and metric pushing should be done separately IMO. The reverse Metric.r *Registry reference also seems unnecessary.
- Having another method for outputs to implement (SetBuiltinMetrics) seems unnecessary since most outputs shouldn't care about this, and if needed they should be able to retrieve it with e.g. metrics.GetBuiltInMetrics() (without a context). But I think you're removing this, so 👍.
- Renaming the stats package to metrics to have metrics.Registry also SGTM.
lib/executor/per_vu_iterations.go
Outdated
@@ -212,6 +212,7 @@ func (pvi PerVUIterations) Run(parentCtx context.Context, out chan<- stats.Sampl
activeVUs.Done()
}

droppedItearationMetric := metrics.GetBuiltInMetrics(parentCtx).DroppedIterations
- droppedItearationMetric := metrics.GetBuiltInMetrics(parentCtx).DroppedIterations
+ droppedIterationMetric := metrics.GetBuiltInMetrics(parentCtx).DroppedIterations
Same below.
7da10ac to b5a25a8 (Compare)
I really prefer to keep the metric names as constants. I am pretty sure I have a builtinMetrics in the vicinity for most of them (especially in the tests), but I still feel that having a constant for what is effectively a constant is probably much better than not having one.
I can now move Registry to metrics, but renaming stats to metrics will be an even bigger change that IMO won't really help anything.
For the record, these changes can and probably will be done in following PRs; after all, we will need to at least:
- drop js support for thresholds, which I would expect to have other side-effects
- try to make tags more ... performant, which will likely be an even bigger change requiring a lot of moving stuff around that I could not possibly anticipate currently
- add GaugeMetric and TrendMetric, and possibly GaugeSample and TrendSample and so on, all of which, like the above, will probably require a lot of changes, possibly moving things between packages and adding interfaces and such
- emit only certain metrics and submetrics, which will likely require big refactors as well
which all means that doing more renames now will likely just mean we will need to rename them again in the future ;).
edit (as I posted prematurely):
I will possibly be squashing all of this, but I am interested in what you think about the new Metric.Emit method.
cmd/run.go
Outdated
// that can accept them. And if some output fails, stop the already started
// ones. This may take some time, since some outputs make initial network
// requests to set up whatever remote services are going to listen to them.
func StartOutputs(
I am not particularly fond of this idea myself; I was wondering whether builtinMetrics should instead be populated by their respective modules, although that will likely have problems with scripts that first register a metric with the name
0700ac8 to f8a815b (Compare)
LGTM overall, just some questions and suggestions.
I'm wondering if we can avoid having to initialize Registry and BuiltinMetrics everywhere and just use the instance from InitEnvironment. In that case it would make sense to embed the builtin metrics on Registry, and we could access it from where it's needed with common.GetInitEnv(). It's still relying on Context, but this is one of the few valid use cases IMO.
I would also try to get rid of the BuiltinMetrics struct and access them from Registry.metrics with Registry.GetMetric(name string), where name is one of the constants defined in metrics.go. But this might become unwieldy for places like Trail.SaveSamples, so maybe not.
InitEnvironment should only be available in the init scope/context, and we always emit metrics (which is what we need builtinMetrics for) in places outside of that, so ... we will need to give access to builtinMetrics through something else in all cases. Calling a function to get the metric we want to emit on each emission, even if that is just a map access, is likely going to be way too slow ... especially for something like emitting metrics.
b8b8f44 to b232715 (Compare)
Mostly LGTM beyond some minor nitpicks I mentioned inline. Also, please fix your PR description, the current one looks like it was a leftover from the WIP days of the pull request 😅
logger *logrus.Logger, src *loader.SourceData, typ string, filesystems map[string]afero.Fs, rtOpts lib.RuntimeOptions,
builtinMetrics *metrics.BuiltinMetrics, registry *metrics.Registry,
This is getting a bit crowded 😅 Not saying we should change it now, just making a note. IIRC we wanted to refactor and rename it anyway, so it's not a big deal, we should just do that sooner.
func (e *ExecutionScheduler) Run(globalCtx, runCtx context.Context, engineOut chan<- stats.SampleContainer) error {
//nolint:cyclop
func (e *ExecutionScheduler) Run(
	globalCtx, runCtx context.Context, engineOut chan<- stats.SampleContainer, builtinMetrics *metrics.BuiltinMetrics,
Why do we pass the BuiltinMetrics here, instead of in the ExecutionScheduler constructor? 😕 Wouldn't it be clearer to pass it to the NewExecutor() constructor, e.g. through the ExecutionState?
Line 106 in 0628db0:
NewExecutor(*ExecutionState, *logrus.Entry) (Executor, error)
values[metrics.HTTPReqSending.Name] = stats.D(trail.Sending)
values[metrics.HTTPReqWaiting.Name] = stats.D(trail.Waiting)
values[metrics.HTTPReqReceiving.Name] = stats.D(trail.Receiving)
values[metrics.HTTPReqsName] = 1
You can have access to the built-in metrics here (through WithBuiltinMetrics), right? So why not use the Name properties from them and avoid the constants?
Because unfortunately in some places I don't have a context in these tests, so after I tried for an hour to add it everywhere and failed, I decided there was no point in making it harder for myself. There are way too many places where using builtinMetrics in the tests is way too hard for the (IMO) nonexistent benefit of not having the names as consts, so I prefer to skip adding a bunch of lines. Right here, for example, this is a function: it doesn't have access to anything, its tests don't have access to anything, and before I stopped adding stuff I was at 100+ lines of changes just to change that and have working tests.
Again, I am pretty sure a lot of this code will be changed again and again while we do the remaining metric changes, so making it "better" in practically any way that adds more code or is harder seems like wasted time to me.
Hmm, yeah, I can see how it will be tricky in some of the standalone tests. That said, I'd probably prefer if we used naked strings in these tests instead, since we're only verifying that some metric exists or something like that, and we'll probably never change these values... 🤷
Having the names as exported consts or not doesn't functionally matter, but the current way is more misleading than helpful IMO. It makes it seem OK to use the constants in normal non-test code, like here, when an explicit dependency through WithBuiltinMetrics would be easy and much cleaner. Still, as I mentioned, this is a relatively minor nitpick, and we can easily fix it later if we decide to, so I'm not very insistent on changing it now.
metrics.DataSent.Name: 1234.5,
metrics.DataReceived.Name: 6789.1,
metrics.IterationDuration.Name: stats.D(10 * time.Second),
metrics.DataSentName: 1234.5,
Same here, you literally have the builtinMetrics struct in the test above.
DroppedIterations = stats.New("dropped_iterations", stats.Counter)
Errors = stats.New("errors", stats.Counter)
const (
	VUsName = "vus" //nolint:golint
I think I've mentioned this before, but I don't see a good reason for these to be public, or even consts. They can be simple strings in RegisterBuiltinMetrics(), because they shouldn't have to be directly referenced anywhere... They should only be used once, in the definition, and never needed as strings anywhere else again.
LGTM. Not particularly fond of all the passing of *BuiltinMetrics and *Registry everywhere, and all the references kept around, but everything else looks clean 👍
@@ -346,7 +349,7 @@ func (car ConstantArrivalRate) Run(parentCtx context.Context, out chan<- stats.S
// dropped - we aren't going to try to recover it, but

stats.PushIfNotDone(parentCtx, out, stats.Sample{
	Value: 1, Metric: metrics.DroppedIterations,
	Value: 1, Metric: droppedIterationMetric,
Why declare a variable? It's only used here, so just access builtinMetrics.DroppedIterations directly? Same for the other executors.
I remember I did it so it's faster, but I don't remember whether it was previously a function call, or whether I decided the dereference would be slow enough 🤔 I can revert it, but I don't think it matters much, as we are likely going to redo every call to PushIfNotDone at some point during this rewrite, so I prefer not to rebase again just for this :(
@@ -0,0 +1,53 @@
package metrics
We really should add a linter for this 😄 Maybe go-header works now?
package metrics
/*
 *
 * k6 - a next-generation load testing tool
 * Copyright (C) 2021 Load Impact
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Affero General Public License for more details.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 */
package metrics
uff
unfortunately, from a quick look, it seems like there has been no improvement in the areas that were previously problematic :(
I found other files with it missing, so I prefer to fix this in another PR together with all of them.
Add a basic metric registry, and define metrics as builtin for the registry instead of having globals.
This also fixes problems around custom metrics "redefining" a metric with the same name but a different type.
fixes #1832