Error estimation in model.expected_data()
#1509
-
Dear developers, I wondered if there were a way to return the uncertainties through pyhf when calling `model.expected_data()`. Ideally, I would like to understand whether there exists a method for this. I have attempted to do this by hand, but I am quite unsure how to deal with the less trivial modifiers. Any help would be much appreciated. Thanks! Blaise
-
Hi, there are two ways of doing this: bootstrapping and error propagation. `pyhf` itself does not include a method for this at the moment.

For bootstrapping, see #1187 (comment) and #1359 (comment) for an example. You can sample from a multivariate Gaussian and evaluate the model prediction for all samples. In principle you could sample from the full likelihood directly to avoid the Gaussian approximation as well.
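The bootstrapping approach can be sketched as follows. This is only a minimal illustration with numpy: `expected_data` here is a hypothetical stand-in for the actual model prediction (e.g. what `model.expected_data(pars)` would return in pyhf), and the best-fit values and covariance are assumed to come from a previous fit.

```python
import numpy as np

# Hypothetical stand-in for the fitted model: maps parameter values to
# the expected per-bin yields (in pyhf this would be model.expected_data).
def expected_data(pars):
    mu, gamma = pars
    nominal = np.array([10.0, 20.0, 30.0])
    return mu * gamma * nominal

# Assumed best-fit parameters and covariance from a previous fit
# (in practice these come from the minimizer, e.g. iminuit).
best_fit = np.array([1.0, 1.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

rng = np.random.default_rng(0)
n_boot = 10_000

# Sample parameter vectors from the multivariate Gaussian approximation
samples = rng.multivariate_normal(best_fit, cov, size=n_boot)

# Evaluate the model prediction for every sampled parameter vector
predictions = np.array([expected_data(p) for p in samples])

# Per-bin uncertainty of the expected data
yields_std = predictions.std(axis=0)
print(yields_std)
```

The standard deviation across bootstrap samples then serves as the per-bin uncertainty estimate; sampling the full likelihood instead of the Gaussian would replace the `multivariate_normal` step.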
Error propagation can for example be done via `iminuit.util.propagate` as described in this tutorial. Another implementation is provided via the `cabinetry` library in `cabinetry.model_utils.calculate_stdev`. See this example notebook showing the use of `cabinetry`, and in particular the c…
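For illustration, the linear error propagation behind such utilities can be written out by hand. This is a rough numpy sketch, not the `iminuit` or `cabinetry` implementation: it propagates a parameter covariance through the model prediction via a forward-difference Jacobian, with the same hypothetical `expected_data`, best-fit values, and covariance assumed above.

```python
import numpy as np

def expected_data(pars):
    # Hypothetical model prediction, standing in for model.expected_data(pars)
    mu, gamma = pars
    nominal = np.array([10.0, 20.0, 30.0])
    return mu * gamma * nominal

# Assumed fit results
best_fit = np.array([1.0, 1.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

def propagate(fn, pars, cov, eps=1e-6):
    """Linear error propagation: y_cov = J @ cov @ J.T, with a
    forward-difference Jacobian J[i, j] = d fn_i / d pars_j."""
    y0 = np.asarray(fn(pars), dtype=float)
    jac = np.empty((y0.size, len(pars)))
    for j in range(len(pars)):
        shifted = np.array(pars, dtype=float)
        shifted[j] += eps
        jac[:, j] = (np.asarray(fn(shifted)) - y0) / eps
    return y0, jac @ cov @ jac.T

y, y_cov = propagate(expected_data, best_fit, cov)
y_unc = np.sqrt(np.diag(y_cov))  # per-bin uncertainties
print(y, y_unc)
```

For this linear toy model the propagated uncertainties agree with the bootstrap estimate; for strongly nonlinear models the two approaches can differ, which is one reason to prefer sampling.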