This is a draft of an idea to generalise the metrics API. Before I commit to making it clean and documenting it and such, I would like the maintainers' opinion on the general principle.
The idea is to provide what I've been calling “virtual metrics” (for lack of a better name, though I don't think it's a bad one): metrics which are not backed by their usual `IORef` but can be sampled from a preexisting store (or store-like thing). Existing examples of such metrics are the GHC runtime metrics and the proc metrics, though I didn't change their implementation (and I don't think I should plan to either). The situation that motivated this PR was that I wanted to expose a measure computed by a database query.

With the current API, I have essentially two options: either I define the metric myself using the concrete `Metric` type, which is a tad low-level and error-prone; or I make a standard metric and have a background thread periodically forward my measurement to the `IORef`, with all the baggage that entails.

With the virtual metric API (this draft only implements gauges, since gauges are where this makes the most sense, but other metric types may well benefit from the same treatment), we make a metric simply by providing the IO action that will measure the value when the metric is sampled.
A key ingredient which makes this idea actually useful is that it is easy to define the regular metrics as virtual metrics. To that effect, I extended the `Metric` API a little with:

- `metricIO :: IO (Metric a) -> Metric a` (I don't like the name, though).
- A `Functor` instance, which lets me, for instance, replace `VirtualGauge` by the appropriate `Gauge` when creating a new standard gauge.
- An `Applicative` instance, which I don't actually need in this particular PR, but we might as well: it lets us group metrics under a single `Metric Foo`, like is done in the GHC runtime metrics and the proc metrics. It was very cheap to write, and since I was making the `Metric` API more composable, I thought I might as well.

It's all part of making it easier to define one's own custom metric packages on top of `prometheus-client`.

Does this look like a reasonable plan? Any recommendations or requests?
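For concreteness, the composability described above can be sketched on the same simplified stand-in representation (a metric as its sampling action). These instances and the `metricIO` body are my illustration of the principle, not the real prometheus-client internals:

```haskell
import Data.IORef

-- Same simplified stand-in as before: a metric is its sampling action.
newtype Metric a = Metric { sampleMetric :: IO a }

-- Functor: post-process the sampled value, e.g. to adapt a
-- VirtualGauge-shaped metric into a Gauge-shaped one.
instance Functor Metric where
  fmap f (Metric m) = Metric (fmap f m)

-- Applicative: group several metrics under a single Metric Foo,
-- in the style of the GHC runtime and proc metric bundles.
instance Applicative Metric where
  pure = Metric . pure
  Metric f <*> Metric x = Metric (f <*> x)

-- In this representation metricIO is just a monadic join; the setup
-- action runs as part of sampling. (The real API would presumably
-- run the setup once, at registration time.)
metricIO :: IO (Metric a) -> Metric a
metricIO io = Metric (io >>= sampleMetric)

-- Example bundle grouping two gauges under one sampled value.
data DbStats = DbStats { rowCount :: Double, connCount :: Double }

main :: IO ()
main = do
  let rows  = Metric (pure 100) :: Metric Double
      conns = Metric (pure 7)
      stats = DbStats <$> rows <*> conns   -- Applicative grouping
  s <- sampleMetric stats
  print (rowCount s + connCount s)  -- 107.0

  -- A "regular" IORef-backed metric defined via metricIO:
  r <- sampleMetric (metricIO (Metric . readIORef <$> newIORef 3))
  print r  -- 3.0
```

This is only meant to show that, once `Metric` has these instances, bundles like the proc metrics fall out of ordinary `<$>`/`<*>` composition.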