
after-the-fact alerting runs, in case alerting system was down #13

Open
woodsaj opened this issue Apr 4, 2016 · 1 comment

Comments

@woodsaj (Contributor) commented Apr 4, 2016

Issue by Dieterbe
Wednesday Jul 22, 2015 at 10:33 GMT
Originally opened as raintank/grafana#356


This grew out of #291; please read from raintank/grafana#291 (comment) onwards to participate in this ticket.
I think we should support backfilling old alerting jobs, but not necessarily send critical notifications.
Also, this is not high priority.

@woodsaj (Contributor, Author) commented Aug 11, 2016

I do not think this should ever be implemented.
I think the disagreement comes down to differing interpretations of what the alert outcome metrics represent.

To me they represent the outcome of the check. So if a check doesn't run, then there rightfully should be gaps in the metrics. When I look at the metrics I think, "at this point in time, this is what the alerting system thought the state of the endpoint was".

However, if you treat the alert outcome metrics as the state of the endpoint being monitored, then it does make sense to backfill data so that you can accurately calculate percent uptime.
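
For illustration only, here is a minimal sketch (not from this codebase; the sample encoding and `percentUptime` function are hypothetical) of how the two interpretations produce different percent-uptime numbers when the alerting system was down and left gaps:

```go
package main

import "fmt"

// percentUptime computes uptime from alert outcome samples.
// 1 = ok, 0 = critical; a nil entry is a gap where the check did not run.
// If countGaps is false, gaps are treated as "unknown" and excluded (check-outcome view).
// If countGaps is true, gaps count against uptime, showing why backfilling would be
// needed to get an accurate number under the endpoint-state view.
func percentUptime(samples []*int, countGaps bool) float64 {
	ok, total := 0, 0
	for _, s := range samples {
		if s == nil {
			if countGaps {
				total++ // gap counted, but not as ok
			}
			continue
		}
		total++
		if *s == 1 {
			ok++
		}
	}
	if total == 0 {
		return 0
	}
	return 100 * float64(ok) / float64(total)
}

func main() {
	one, zero := 1, 0
	// 4 successful checks, 1 failed check, 5 gaps while the alerting system was down.
	samples := []*int{&one, &one, &one, &one, &zero, nil, nil, nil, nil, nil}
	fmt.Printf("gaps excluded: %.1f%% uptime\n", percentUptime(samples, false)) // 80.0%
	fmt.Printf("gaps counted:  %.1f%% uptime\n", percentUptime(samples, true))  // 40.0%
}
```

Under the check-outcome interpretation the 80% figure is honest about what the alerting system actually observed; under the endpoint-state interpretation neither number is right without backfilling the gap with the endpoint's real state.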
