I have a question regarding implementing a specific setup using the Bayesian GPLVM framework.
Setup:
I want to estimate latent X such that
Y = f(X) + error
where Y is NxD (multi-output) and X is latent. I have a prior on X such that X* ~ N(X, s). Assuming s = 0.1, I would like to use the GPLVM framework to recover the posterior latent X. Since the setup is part of a simulation study, I have the true X to compute an RMSE for recovery.
As part of this, I am setting custom priors for the covariance function hyperparameters as well as the error variance; a rough sketch of the kind of setup I mean is below.
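This is only a minimal sketch with GPy's `BayesianGPLVM`, not my exact code: the toy data, the particular Gamma priors, and the inducing-point count are placeholders, and initialising the variational latents at X* is an approximation of the N(X*, s) prior rather than a hard constraint.

```python
import numpy as np
import GPy

# Toy stand-in data with the same structure as the simulation study
# (the real datasets come from an exact SE GP; this is just for illustration).
rng = np.random.default_rng(0)
N, D, Q, s = 100, 5, 1, 0.1
X_true = rng.normal(size=(N, Q))                          # true latent inputs
Y = np.hstack([np.sin(X_true + d) for d in range(D)])     # some nonlinear multi-output map
Y += 0.1 * rng.normal(size=(N, D))
X_star = X_true + np.sqrt(s) * rng.normal(size=(N, Q))    # noisy prior information X*

kernel = GPy.kern.RBF(input_dim=Q)

# Initialise the variational posterior over the latents at the prior mean X*
# with variance s. Note: this only sets the starting point, not a hard N(X*, s) prior.
m = GPy.models.BayesianGPLVM(
    Y, input_dim=Q, X=X_star, X_variance=s * np.ones((N, Q)),
    kernel=kernel, num_inducing=25,
)

# Custom priors on the kernel hyperparameters and the error variance
# (the Gamma means/variances here are placeholders, not the actual choices).
m.kern.lengthscale.set_prior(GPy.priors.Gamma.from_EV(1.0, 0.5))
m.kern.variance.set_prior(GPy.priors.Gamma.from_EV(1.0, 0.5))
m.likelihood.variance.set_prior(GPy.priors.Gamma.from_EV(0.1, 0.05))

m.optimize(messages=True, max_iters=2000)

# Posterior means of the latent inputs and the recovery RMSE against the truth.
X_post = np.asarray(m.X.mean)
rmse = np.sqrt(np.mean((X_post - X_true) ** 2))
print("posterior RMSE:", rmse)
```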
When I use this setup, the RMSE is absurdly high, which makes me think I am making a mistake somewhere. It would be helpful to know whether someone has already tried a similar problem scenario, or whether I am making an obvious mistake.
Thanks!
Hi @Soham6298
I will take a look at this.
When testing with a simple example, I did not experience such large losses. Could you share a bit more of your code, ideally in a way that lets me work on data similar to yours, so that the results are comparable?
Since I am running some extensive simulation studies that are part of a more elaborate model comparison, I have set up a small notebook that tests GPy on 50 simulated datasets. These datasets were generated from an exact squared exponential GP, with the true parameters (length scale, marginal variance and error variance) sampled from the same distributions as the priors in the GPy model; the data-generating process for one dataset is sketched below.
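Roughly, one dataset is generated along these lines (the specific Gamma shapes/scales are placeholders for the distributions actually used, and s is treated as the prior variance):

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, Q, s = 50, 5, 1, 0.1

# True hyperparameters drawn from the same distributions used as priors in the model
# (the particular Gamma shapes/scales here are placeholders).
lengthscale = rng.gamma(shape=2.0, scale=0.5)
marg_var = rng.gamma(shape=2.0, scale=0.5)
noise_var = rng.gamma(shape=2.0, scale=0.05)

# True latent inputs and an exact squared-exponential covariance over them.
X_true = rng.normal(size=(N, Q))
sqdist = np.sum((X_true[:, None, :] - X_true[None, :, :]) ** 2, axis=-1)
K = marg_var * np.exp(-0.5 * sqdist / lengthscale ** 2)

# Each output dimension is an independent draw from the same GP, plus observation noise.
L = np.linalg.cholesky(K + 1e-8 * np.eye(N))
F = L @ rng.normal(size=(N, D))
Y = F + np.sqrt(noise_var) * rng.normal(size=(N, D))

# Noisy prior information X* ~ N(X_true, s) and the naive baseline RMSE,
# i.e. the error from simply using the prior means as the estimate of X.
X_star = X_true + np.sqrt(s) * rng.normal(size=(N, Q))
naive_rmse = np.sqrt(np.mean((X_star - X_true) ** 2))
print("naive RMSE:", naive_rmse)
```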
I compute the posterior RMSE for the latent inputs from the GPy model against the true X from my simulated data. I also report a naive RMSE, which is simply the RMSE of the prior means for the latent inputs compared to ground truth. I get the following result:
Thanks for uploading an example!
I have looked at it but I'm afraid I need some more time until I can give a proper answer on this. Sorry! I hope this is not something urgent.
Of course! Let me know if you need any additional input from my side. Just to mention, I have also used a Pyro GPLVM on the same datasets, and the results for the Pyro model also exceed the naive RMSE that I present in the example above; that run roughly followed the pattern sketched below.
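This follows the pattern of Pyro's GPLVM tutorial and is only illustrative: the tensor conversions, inducing-point choice and step count are placeholders, and Y / X_star are assumed to come from a simulation like the one above.

```python
import torch
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist

# Assumes Y (N x D) and X_star (N x Q) as numpy arrays from the simulation above.
y = torch.as_tensor(Y, dtype=torch.get_default_dtype()).t()   # Pyro's GP models expect (D, N) outputs
X_prior_mean = torch.as_tensor(X_star, dtype=torch.get_default_dtype())
Q = X_prior_mean.size(1)

kernel = gp.kernels.RBF(input_dim=Q, lengthscale=torch.ones(Q))
Xu = X_prior_mean[::5].clone()   # a subset of the prior means as inducing inputs

gpmodule = gp.models.SparseGPRegression(
    X_prior_mean.clone(), y, kernel, Xu, noise=torch.tensor(0.1), jitter=1e-5
)

# Treat X as latent with prior N(X*, sqrt(s)) and a Normal variational guide.
gpmodule.X = pyro.nn.PyroSample(dist.Normal(X_prior_mean, 0.1 ** 0.5).to_event())
gpmodule.autoguide("X", dist.Normal)

losses = gp.util.train(gpmodule, num_steps=2000)

# Posterior means of the latent inputs, used for the recovery RMSE.
X_post = gpmodule.X_loc.detach().numpy()
```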