Dear authors,

Apologies for opening so many issues already; unfortunately, I have another query regarding the method. I would be grateful for your response.
For the evaluation to work, we need to provide the "gp_hyperparameters.pkl" file. This file contains precomputed Gaussian process hyperparameters for the test datasets, such as the lengthscale, variance, and noise variance. These are loaded in the environment as shown below:
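A minimal sketch of what this loading step might look like; the pickle's structure (a dict keyed by dataset name with "lengthscale", "variance", and "noise_variance" entries) is an assumption for illustration, not the repository's actual layout:

```python
# Minimal loading sketch; the file structure below is an assumption.
import pickle

with open("gp_hyperparameters.pkl", "rb") as f:
    gp_hyperparameters = pickle.load(f)

# Hypothetical per-dataset entry:
hp = gp_hyperparameters["dataset_0"]   # assumed key
lengthscale = hp["lengthscale"]        # per-dimension kernel lengthscales
variance = hp["variance"]              # kernel (signal) variance
noise_variance = hp["noise_variance"]  # Gaussian noise variance
```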
I do not understand why a pre-trained GP "model" (and not just the hyperparameters) on the test dataset is required beforehand for both MetaBO and TAF to be evaluated on the test dataset.
To generate these GP hyperparameters ("gp_hyperparameters.pkl"), I use the following code, where "X" and "y" come from "objectives.pkl", the meta-data containing the hyperparameter configurations and their responses for each dataset (training and test):
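A sketch of what this generation code might look like, using GPy (the `m.optimize()` call mentioned in the reply below suggests GPy); the structure of "objectives.pkl" and the stored key names are assumptions:

```python
# Sketch of generating gp_hyperparameters.pkl with GPy; the structure of
# objectives.pkl ({dataset_name: (X, y)}) and the key names are assumptions.
import pickle
import GPy

with open("objectives.pkl", "rb") as f:
    objectives = pickle.load(f)

gp_hyperparameters = {}
for name, (X, y) in objectives.items():
    kernel = GPy.kern.RBF(input_dim=X.shape[1], ARD=True)
    m = GPy.models.GPRegression(X, y.reshape(-1, 1), kernel)
    m.optimize()  # maximize the marginal likelihood w.r.t. the hyperparameters
    # Store only the optimized hyperparameters, not the fitted model.
    gp_hyperparameters[name] = {
        "lengthscale": m.kern.lengthscale.values.copy(),
        "variance": float(m.kern.variance),
        "noise_variance": float(m.Gaussian_noise.variance),
    }

with open("gp_hyperparameters.pkl", "wb") as f:
    pickle.dump(gp_hyperparameters, f)
```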
Could you also give feedback on whether this is how the "gp_hyperparameters.pkl" file was meant to be generated? From my perspective, the result is a trained GP model, not just the hyperparameters of a GP model.
In our paper we decided to switch off as many confounding factors as possible and therefore optimized the GP hyperparameters offline for all methods (instead of online during the optimization). Note, however, that it would also be possible to tune the hyperparameters online with MetaBO.
We generated the gp_hyperparameters.pkl file as you suggested. The m.optimize() command optimizes the GP hyperparameters of a fitted GP model, so you end up with a fitted model whose hyperparameters have been optimized. You then throw away the model and store only the hyperparameters; there is no need to store the pre-trained GP model itself.
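For illustration, a minimal sketch (again assuming GPy and the key names used above) of rebuilding a GP at evaluation time from only the stored hyperparameters, which is why the fitted model itself never needs to be saved:

```python
# Sketch: rebuild a GP from stored hyperparameters alone (assumed key
# names); the fitted model from the offline optimization is not needed.
import GPy

def gp_from_hyperparameters(X, y, hp):
    kernel = GPy.kern.RBF(input_dim=X.shape[1], ARD=True,
                          variance=hp["variance"],
                          lengthscale=hp["lengthscale"])
    # No m.optimize() call here: the hyperparameters stay fixed.
    return GPy.models.GPRegression(X, y.reshape(-1, 1), kernel,
                                   noise_var=hp["noise_variance"])
```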
I hope this helps!
Best,
Michael