Is there an easy way to get the mean and variance at the training points? Is it already available in `gp` after calling `fit!` (thus saving compute), or do I need to use `predict_y` to compute it?
Thanks!
No, this isn't implemented, and I agree that it should be, as recomputing the covariance matrix is wasteful. If you've already implemented this, I would be happy to integrate it into the package.
As part of a broader Value Function Iteration exercise, I wish to feed the output of `predictMVN` or `predict_f` back into an instantiation of `GP`.
E.g. suppose we have created an instance `gp` by feeding `GP` the usual ingredients: training data `x1`, target data `t1`, along with e.g. `MeanPoly(B)` and a suitable kernel `kern`.
After optimising or fitting, how would I extract the posterior mean and covariance functions from `gp` and, moreover, extract them as types `Mean` and `Kernel`, respectively?
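To make the setup concrete, here is roughly what I have in mind; the data, coefficient matrix `B`, and hyperparameters below are just placeholders:

```julia
using GaussianProcesses

# Placeholder training data (x1, t1), purely illustrative
x1 = collect(range(-1.0, 1.0; length=15))
t1 = x1.^2 .+ 0.05 .* randn(length(x1))

# A 1x1 coefficient matrix B gives a degree-1 polynomial mean in one input dimension
B = reshape([0.5], 1, 1)
kern = SE(0.0, 0.0)                         # squared-exponential kernel, log-scale hyperparameters

gp = GP(x1, t1, MeanPoly(B), kern, -2.0)    # -2.0 is the log observation noise
optimize!(gp)                               # fit the hyperparameters

# Posterior over the latent function at new points, with the full covariance matrix
xstar = collect(range(-1.0, 1.0; length=50))
μ, Σ = predict_f(gp, xstar; full_cov=true)
```

What I would like is to repackage the posterior mean and covariance behind `μ` and `Σ` as `Mean` and `Kernel` objects, so they could be passed straight back into a fresh `GP(...)` call at the next step of the iteration.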
@maximerischard, from what you're saying, I should currently be able to recompute. Can you explain how one does that? I might then attempt to fix this issue.
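To make sure I understand, is the recomputation you mean simply calling the predict functions on the training inputs again? Something along these lines (toy data only, just to show the calls):

```julia
using GaussianProcesses

# Toy data, purely for illustration
x = collect(range(0.0, 2π; length=20))
y = sin.(x) .+ 0.1 .* randn(length(x))

gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)
optimize!(gp)

# Recompute the posterior at the training inputs
μ_f, σ²_f = predict_f(gp, x)   # latent function: mean and variance
μ_y, σ²_y = predict_y(gp, x)   # observations: mean and variance including the noise term
```

If that is indeed the intended route, I could look into caching these quantities at the end of `fit!` so the covariance isn't rebuilt from scratch.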