Training time #18
Comments
It is normal, as the nuScenes dataset is very large.
Could you please provide the exact batch size, GPU count, and training time on the nuScenes dataset? Thanks @chaytonmin @IrohXu
Hi, have you succeeded in reproducing the result?
No, it is too time-consuming to pre-train with my available GPUs. I think pre-training with 8 × 80 GB A100 GPUs would be a more rational choice for me in the future.
Hi,
Thanks for your excellent work! I would like to know the detailed training configuration (e.g., batch size, GPU count, and training time) of your pretraining stage. I have been pre-training on the nuScenes dataset for 9 hours with 8 RTX 3090 GPUs at a batch size of 64, but the very first epoch has not yet finished. I am wondering whether this is normal.
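A quick back-of-the-envelope check can tell whether 9 hours without completing an epoch is plausible. The sketch below is an assumption-laden estimate, not from the repo: it takes the nuScenes train split as roughly 28,130 keyframe samples, treats batch size 64 as the total batch across all 8 GPUs (if 64 is per-GPU, the iteration count drops by 8×), and derives the per-iteration time such a schedule would imply.

```python
import math

# Hypothetical estimator, not part of the project's codebase.
def epoch_time_hours(num_samples: int, batch_size: int, secs_per_iter: float) -> float:
    """Estimate wall-clock hours for one epoch of training."""
    iters = math.ceil(num_samples / batch_size)
    return iters * secs_per_iter / 3600.0

# Assumed: ~28,130 train samples; batch size 64 total across GPUs.
iters = math.ceil(28130 / 64)
print(iters)                        # 440 iterations per epoch
print(round(9 * 3600 / iters, 1))  # 73.6 s/iter implied by a 9+ hour epoch
```

Under these assumptions, a 9-hour epoch implies more than a minute per iteration, which suggests either a very heavy per-sample pipeline (e.g., dense occupancy targets generated on the fly) or a data-loading bottleneck worth profiling.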