
Training time #18

Open
SxJyJay opened this issue Dec 5, 2022 · 4 comments

Comments


SxJyJay commented Dec 5, 2022

Hi,
Thanks for your excellent work! I want to know the detailed training configurations (e.g., batch size, GPU num and training time) of your pretraining stage. I have pre-trained on the nuScenes dataset for 9 hours with 8 RTX3090 GPUs of batch size 64, but haven't finished the very first epoch. And I am wondering whether it is a normal phenomenon.

chaytonmin (Owner) commented

> I have pre-trained on the nuScenes dataset for 9 hours with 8 RTX 3090 GPUs and a batch size of 64, but haven't finished the first epoch. I am wondering whether this is normal.

It is normal, as the nuScenes dataset is very large.

FrontierBreaker commented

Could you please provide the exact batch size, GPU count, and training time on the nuScenes dataset? Thanks @chaytonmin @IrohXu

FrontierBreaker commented

> I have pre-trained on the nuScenes dataset for 9 hours with 8 RTX 3090 GPUs and a batch size of 64, but haven't finished the first epoch.

Hi, have you succeeded in reproducing the result?

SxJyJay (Author) commented Feb 16, 2023

> Hi, have you succeeded in reproducing the result?

No, it is too time-consuming to pre-train with my available GPUs. I think pre-training with 8×80 GB A100 GPUs would be a more rational choice for me in the future.
