PyTorchLightningPruningCallback messes with Multiworker Dataloaders #154
Labels: bug (Something isn't working)
Comments
@mspils Does this problem still occur with the latest Optuna v3.4?

Yes and no. It crashes, which is probably an improvement:

Same issue here.
Expected behavior
When using the `PyTorchLightningPruningCallback`, a pruned trial should complete without errors.
Environment
Error messages, stack traces, or logs
Steps to reproduce
Additional context (optional)
When optimizing a study with Optuna using the `PyTorchLightningPruningCallback`, pruned trials may not finish cleanly. DataLoaders with multiple workers are not shut down properly and may even interfere with later trials; at a minimum, the logged `v_num`s are sometimes out of order.
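The suspected failure mode — worker loops that outlive a pruned trial because the pruning exception propagates out of the training loop before the workers are joined — can be sketched without PyTorch or Optuna. This is an illustrative simplification, not the library's actual code: threads stand in for DataLoader worker processes, and `TrialPruned` is a local stand-in for `optuna.TrialPruned`:

```python
import threading
import time


class TrialPruned(Exception):
    """Local stand-in for optuna.TrialPruned (for illustration only)."""


def run_trial(clean_shutdown: bool):
    """Start two 'loader workers', then abort the trial the way pruning would."""
    stop = threading.Event()

    def worker():
        # Simulates a DataLoader worker: loops until explicitly told to stop.
        while not stop.is_set():
            time.sleep(0.01)

    workers = [threading.Thread(target=worker, daemon=True) for _ in range(2)]
    for w in workers:
        w.start()
    try:
        # Pruning raises out of the training loop before workers are joined.
        raise TrialPruned()
    except TrialPruned:
        if clean_shutdown:
            stop.set()          # tell the workers to exit ...
            for w in workers:
                w.join()        # ... and wait until they have
    return stop, workers


# Without cleanup, the workers outlive the pruned trial:
stop_a, leaked = run_trial(clean_shutdown=False)
print([w.is_alive() for w in leaked])   # → [True, True]
stop_a.set()                            # tidy up the demo

# With cleanup, the trial resolves and leaves nothing running:
stop_b, joined = run_trial(clean_shutdown=True)
print([w.is_alive() for w in joined])   # → [False, False]
```

In the real setup the same lifecycle question applies to the DataLoader's worker processes: if the pruning exception unwinds the trainer before they are torn down, they can linger and interfere with subsequent trials.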