Include validation series with hyperparameter optimization in Darts #2301
Hi @ETTAN93, yes, for this you can simply define a validation period and run the historical forecasts / backtest over it inside the objective function. For the final test set, adjust the `start` of the historical forecasts so that only the test period is forecast.
Hi @dennisbader, just to clarify what you mean: assuming I have a dataset that goes from 2020-01-01 to 2023-12-31, are you saying to split the dataset into, for example:
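Something along these lines (a rough sketch, assuming `series` is the full daily `TimeSeries`; the boundary dates follow the rest of this thread):

```python
import pandas as pd

# series: full TimeSeries from 2020-01-01 to 2023-12-31 (daily)
# train: 2020-01-01 to 2021-12-31, val: 2022, test: 2023
train, rest = series.split_before(pd.Timestamp("2022-01-01"))
val, test = rest.split_before(pd.Timestamp("2023-01-01"))
```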
Then within the objective function for hyperparameter optimization, you would set:
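For example (a sketch only; the model, metric and horizon here are placeholders, not from the thread):

```python
import pandas as pd
from darts.metrics import mape
from darts.models import LinearRegressionModel

# placeholder model built from the trial's suggested hyperparameters
model = LinearRegressionModel(lags=30, output_chunk_length=1)

# backtest only over the validation period: pass train + val and start the
# historical forecasts at the beginning of the validation set
val_score = model.backtest(
    train.append(val),
    start=pd.Timestamp("2022-01-01"),
    forecast_horizon=1,
    stride=1,
    retrain=True,
    metric=mape,
)
```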
After getting the hyperparameters, you would then evaluate on the test set again with:
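For example (again a sketch; `best_params` stands for the tuned values, e.g. `study.best_params` from an Optuna study, and the model is a placeholder):

```python
# rebuild the model with the best parameters found on the validation set
best_model = LinearRegressionModel(**best_params, output_chunk_length=1)

test_score = best_model.backtest(
    series,                            # full series: train + val + test
    start=pd.Timestamp("2023-01-01"),  # only forecast over the test set
    forecast_horizon=1,
    stride=1,
    retrain=True,
    metric=mape,
)
```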
Is that correct?
Hi @ETTAN93, yes, that's exactly it 👍 (assuming that your frequency is "D"/daily)
Hi @dennisbader, it seems the model's hyperparameters are tuned on the interval from 2022-01-01 until 2022-12-31 and then used for all the forecasts made from 2023-01-01 onward. However, what if you wanted to do hyperparameter optimization every month in an expanding- or sliding-window cross-validation instead? How would you structure that using Darts?
Hi @noori11, this is usually the way to go: you train the model with as much data as possible "before" the validation set, use the validation set to identify the best parameters, and then just assess the performance on the test set. If by "hyperparameter optimization every month" you mean generating forecasts and assessing performance only once per month, you would have to reuse the trick described in #2497 to obtain forecasts at the desired frequency before computing the metrics. But I would highly recommend using the code snippet mentioned above in this thread. Closing this since the initial question was answered; feel free to reopen this issue or open a new one if something is still not clear.
When tuning hyperparameters for non-time-series data, one would normally split the dataset into a training set, a validation set, and a test set. The validation set is then used to check which set of hyperparameters performs best.
How does this work for historical backtests in time-series forecasting? I referred to the two examples in Darts: example1 and example2.
For example, when just doing a normal historical backtest, assume I have hourly data from 2020-01-01 to 2023-12-31. I would simply specify when the test set starts, e.g. 2023-01-01, and carry out the historical backtest that way, e.g.:
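Something like this (the model here is only a placeholder; the relevant part is the `historical_forecasts` arguments, which match the description below):

```python
import pandas as pd
from darts.models import LinearRegressionModel

model = LinearRegressionModel(lags=168, output_chunk_length=24)

hist_fc = model.historical_forecasts(
    series,
    start=pd.Timestamp("2023-01-01"),  # test set starts here
    forecast_horizon=24,               # predict the next 24 hours
    stride=24,                         # make a forecast every 24 hours
    retrain=30,                        # retrain every 30 iterations = every 30 days
    train_length=90 * 24,              # train on the past 90 days of hourly data
    last_points_only=False,
)
```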
This means that the model is retrained every 30 days with the past 90 days of data. It predicts the next 24 hours every 24 hours.
If I now want to do hyperparameter optimization with Optuna and Darts, would this make sense:
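Roughly the following (a sketch; the model and the search space are placeholders):

```python
import optuna
import pandas as pd
from darts.metrics import mape
from darts.models import LinearRegressionModel


def objective(trial):
    model = LinearRegressionModel(
        lags=trial.suggest_int("lags", 24, 336),
        output_chunk_length=24,
    )
    # backtest over everything after 2023-01-01, i.e. the full remaining data
    return model.backtest(
        series,
        start=pd.Timestamp("2023-01-01"),
        forecast_horizon=24,
        stride=24,
        retrain=30,
        train_length=90 * 24,
        metric=mape,
    )


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
```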
But this then uses the full set of data for the hyperparameter optimization. Do I need to split the data out separately for the test set?