
Mean and variance at the training points #84

Open · rcnlee opened this issue Jul 7, 2018 · 2 comments
rcnlee commented Jul 7, 2018

Is there an easy way to get the mean and variance at the training points? Are they already available in `gp` after calling `fit!` (thus saving compute), or do I need to use `predict_y` to compute them?

Thanks!
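For context, a minimal sketch of the workaround being asked about, assuming the usual GaussianProcesses.jl surface (`GP`, `MeanZero`, `SE`, `optimize!`, `predict_y`); the toy data is illustrative:

```julia
using GaussianProcesses

# Toy 1-D training data
x = 2π .* rand(20)
y = sin.(x) .+ 0.1 .* randn(20)

# Build and fit a GP with a zero mean and squared-exponential kernel
gp = GP(x, y, MeanZero(), SE(0.0, 0.0))
optimize!(gp)              # maximise the marginal likelihood

# The workaround: re-predict at the training inputs. This rebuilds the
# covariance between x and itself from scratch, which is the wasted
# work discussed below.
μ, σ² = predict_y(gp, x)   # posterior mean and variance (incl. noise) at x
```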

maximerischard (Contributor) commented Jul 27, 2018

No, this isn't implemented, and I agree that it should be, as recomputing the covariance matrix is wasteful. If you've already implemented this, I would be happy to integrate it into the package.
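A self-contained sketch of the linear algebra that makes the requested shortcut possible, with a hand-rolled kernel and a zero prior mean for brevity; that the package caches the analogous quantities internally (e.g. the factored covariance and the weight vector) is an assumption here, not a statement about its actual fields:

```julia
using LinearAlgebra

# Toy squared-exponential kernel and 1-D data
k(a, b; ℓ=1.0, σf=1.0) = σf^2 * exp(-abs2(a - b) / (2ℓ^2))
x  = 2π .* rand(20)
y  = sin.(x) .+ 0.1 .* randn(20)
σn = 0.1                            # observation-noise std

K  = [k(a, b) for a in x, b in x]   # training covariance, built once
cK = cholesky(K + σn^2 * I)         # factored once while fitting
α  = cK \ y                         # (K + σn²I)⁻¹ y

# Posterior at the training points, reusing K, cK, and α with no new
# kernel evaluations:
μ  = K * α                          # posterior mean of f at x
V  = K - K * (cK \ K)               # posterior covariance of f at x
σ² = diag(V)                        # pointwise posterior variances
```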

patrickocal commented Oct 30, 2021

I posted this question on the Julia Discourse:

> As part of a broader Value Function Iteration exercise, I wish to feed the output of `predictMVN` or `predict_f` back into an instantiation of `GP`.
> E.g., suppose we have created an instance `gp` by feeding `GP` the usual ingredients: training data `x1`, target data `t1`, along with e.g. `MeanPoly(B)` and a suitable kernel `kern`.
> After optimising or fitting, how would I extract the posterior mean and covariance functions from `gp` and, moreover, extract them as type `Mean` and `Kernel` respectively?

@maximerischard, from what you're saying, I should currently be able to recompute these quantities myself. Can you explain how one does that? I might then attempt to fix this issue.
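In the meantime, the posterior mean and covariance functions the Discourse question asks for can at least be written as plain closures over the fitted quantities. This is a hand-rolled sketch of the standard zero-mean GP posterior formulas, not a package API; wrapping the closures as subtypes of `Mean` and `Kernel` would additionally require implementing the package's custom mean/kernel interface, which is beyond this sketch:

```julia
using LinearAlgebra

# Same toy setup as the sketch above
k(a, b; ℓ=1.0, σf=1.0) = σf^2 * exp(-abs2(a - b) / (2ℓ^2))
x  = 2π .* rand(20)
y  = sin.(x) .+ 0.1 .* randn(20)
cK = cholesky([k(a, b) for a in x, b in x] + 0.1^2 * I)
α  = cK \ y

# Posterior mean and covariance *functions*, evaluable at any new inputs
postmean(s)   = dot([k(s, xi) for xi in x], α)
postcov(s, t) = k(s, t) - dot([k(s, xi) for xi in x], cK \ [k(t, xi) for xi in x])

postmean(1.0)        # posterior mean at s = 1.0
postcov(1.0, 2.0)    # posterior covariance between s = 1.0 and t = 2.0
```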
