I'm trying out a new experimental HMC branch now, and it works fine for Models whose log_posterior is defined with autograd primitives. The catch is that the sanity checks in the Model.log_posterior() superclass method use np.isinf() and np.isnan(), which of course have no gradients.
This was foreseeable, and it's a reminder that proper range checking matters and that not all Models will be differentiable. This issue is a placeholder to make me think more carefully about how autograd interacts with the class hierarchy and what constraints its use imposes. A full solution will need some kind of signpost for how Models and Proposals should interact when derivatives are involved.
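For concreteness, here's a minimal sketch of the interaction, assuming a hypothetical `Model` base class with checks like the ones described; the class and helper names here are illustrative, not the actual code. One possible workaround is to skip the non-differentiable checks while autograd is tracing, which `autograd.tracer.isbox` can detect:

```python
# Hypothetical sketch, not the repository's actual Model class.
import autograd.numpy as np          # autograd's thin wrapper around numpy
from autograd import grad
from autograd.tracer import isbox    # True for values being traced for gradients

class Model:
    def log_posterior(self, theta):
        logp = self._log_posterior(theta)
        # np.isnan/np.isinf have no gradients, so only run the sanity
        # checks on concrete values, not on autograd's traced boxes.
        if not isbox(logp):
            if np.any(np.isnan(logp)) or np.any(np.isinf(logp)):
                raise ValueError("log_posterior returned nan/inf")
        return logp

class GaussianModel(Model):
    def _log_posterior(self, theta):
        return -0.5 * np.sum(theta ** 2)

model = GaussianModel()
grad_logp = grad(model.log_posterior)
print(grad_logp(np.array([1.0, -2.0])))   # [-1.  2.]
```

The trade-off is exactly the one flagged above: differentiability is bought by forgoing the range checks on the gradient path, so those checks would have to happen somewhere else (e.g. on the un-traced call the sampler also makes).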