
About multi-GPU training learning rate #78

Open
RayYoh opened this issue Sep 20, 2024 · 1 comment

Comments

RayYoh commented Sep 20, 2024

Hi @filaPro, recently I have been trying to reproduce the results of OneFormer3D to compare with my own method. Since a single GPU would be relatively slow, I switched to multi-GPU training to align the configs with my method.
My question is: if I train with 3 or 4 GPUs, the effective batch size becomes 3×4 or 4×4. Do I need to scale the learning rate in the raw configs accordingly (i.e. to 3×10⁻⁴ or 4×10⁻⁴)?

Best

filaPro (Owner) commented Sep 20, 2024

We never tried multi-GPU training for OneFormer3D, so yes, multiplying the learning rate should probably be fine, but I'm not sure the metrics will be reproduced exactly after that.
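For reference, the linear scaling heuristic being discussed can be sketched as below. This is only an illustration, not something from the OneFormer3D repo: the per-GPU batch size of 4 and base learning rate of 1e-4 are assumptions taken from the numbers in the question above.

```python
# Sketch of the linear learning-rate scaling rule: scale the base LR by
# the ratio of the new effective batch size to the original one.
# The base_lr/per-GPU batch values below are assumptions, not taken
# from the actual OneFormer3D configs.

def scale_lr(base_lr: float, base_batch: int, num_gpus: int, per_gpu_batch: int) -> float:
    """Scale base_lr linearly with the effective (global) batch size."""
    effective_batch = num_gpus * per_gpu_batch
    return base_lr * effective_batch / base_batch

# 1 GPU keeps the original rate; 3 or 4 GPUs triple or quadruple it.
print(scale_lr(1e-4, 4, 1, 4))  # ≈ 1e-4
print(scale_lr(1e-4, 4, 3, 4))  # ≈ 3e-4
print(scale_lr(1e-4, 4, 4, 4))  # ≈ 4e-4
```

Note that linear scaling is a heuristic (it works best with a warmup phase), so as the reply says, exact metric reproduction is not guaranteed even with the scaled rate.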
