Hello, may I ask: I have 4 RTX 3090s with 24 GB of memory each. Can I train this model?
It probably won't work. The author trains on 4 A100s, and I currently can't even run it on a single A6000.
I tried the demo on an RTX 3090, and it looks trainable.
Another problem was that I could train on one GPU, but validation during multi-GPU training raised errors.
I switched to a new 2x RTX 3090 server, which runs at full power compared to the previous one. Now I can both train and validate using two GPUs.
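For anyone hitting the same "trains on one GPU, validation fails on multiple GPUs" issue: a common cause under PyTorch DistributedDataParallel is that not every rank participates in the same collective calls during evaluation. Below is a minimal, hedged sketch (not this repo's actual code; the `validate` helper, model, and dataset are hypothetical) showing one pattern that usually avoids the problem, sharding the validation set with `DistributedSampler` and reducing the metric across ranks:

```python
# Sketch of validation under PyTorch DDP; assumes dist.init_process_group()
# has already been called and the model/dataset are defined elsewhere.
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler

def validate(model, val_dataset, device, batch_size=8):
    # Every rank gets a disjoint shard of the validation set, so all ranks
    # enter the same collective operations and nothing deadlocks.
    sampler = DistributedSampler(val_dataset, shuffle=False)
    loader = DataLoader(val_dataset, batch_size=batch_size, sampler=sampler)

    model.eval()
    total_loss = torch.zeros(1, device=device)
    count = torch.zeros(1, device=device)
    with torch.no_grad():
        for batch, target in loader:
            batch, target = batch.to(device), target.to(device)
            loss = torch.nn.functional.cross_entropy(model(batch), target)
            total_loss += loss * batch.size(0)
            count += batch.size(0)

    # Aggregate per-rank sums so every rank reports the same metric.
    dist.all_reduce(total_loss, op=dist.ReduceOp.SUM)
    dist.all_reduce(count, op=dist.ReduceOp.SUM)
    return (total_loss / count).item()
```

If validation is instead run only on rank 0 while the other ranks continue into collective calls (or vice versa), the process group can hang or error out, which matches the symptom described above.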