I have 1× RTX 3090 (24 GB) and 2× RTX 3060 (16 GB each), so the total memory is 24 + 16 × 2 = 56 GB. In this setup, is it possible to finetune models?
If they are connected to the same server, yes! But I think you won't be able to use more than 16 GB on each card.
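To make the per-card limit concrete, here is a minimal pure-Python sketch (the helper name and GiB figures are illustrative, not from the thread): under plain data parallelism each GPU holds a full model replica, so what fits is capped by the smallest card, not by the 56 GB pooled total.

```python
def fits_data_parallel(model_gib, gpu_gib):
    """Return True if a full model replica fits on every GPU.

    Under plain data parallelism each device must hold the whole model,
    so the binding constraint is the smallest card's memory.
    """
    return model_gib <= min(gpu_gib)

# Hypothetical setup matching the question: RTX 3090 + 2x RTX 3060, in GiB.
gpus = [24, 16, 16]

print(sum(gpus))                      # pooled total: 56
print(fits_data_parallel(20, gpus))   # False: a 20 GiB replica exceeds the 16 GiB cards
print(fits_data_parallel(14, gpus))   # True: fits on every card
```

Sharded approaches (e.g. FSDP or DeepSpeed ZeRO) relax this by splitting the model across devices, at the cost of extra communication.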