The finetuning notebook uses 1 GPU and the LoRA technique to fine-tune a T5 model with 3B parameters. The task for this issue is to fine-tune the same model (or its 7B variant) across multiple GPU nodes. Use InstaScale and CodeFlare to schedule the training job and retrieve the fine-tuned model, and create a notebook that demos this.
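A minimal sketch of what the demo notebook could start with, assuming the codeflare-sdk Python API: provision a multi-node GPU Ray cluster (with InstaScale acquiring the machines) and submit the fine-tuning script to it. The cluster name, namespace, resource sizes, machine type, and script path are illustrative assumptions, and field names may differ across SDK versions.

```python
# Sketch only: provision a Ray cluster via CodeFlare/InstaScale and submit
# a distributed fine-tuning job. Values below are hypothetical placeholders.
from codeflare_sdk.cluster.cluster import Cluster, ClusterConfiguration
from codeflare_sdk.job.jobs import DDPJobDefinition

cluster = Cluster(ClusterConfiguration(
    name="t5-lora-finetune",          # hypothetical cluster name
    namespace="default",              # hypothetical namespace
    num_workers=2,                    # two GPU worker nodes
    min_cpus=8, max_cpus=8,
    min_memory=48, max_memory=48,     # memory per worker
    num_gpus=1,                       # one GPU per worker
    instascale=True,                  # let InstaScale scale up the machines
    machine_types=["g4dn.xlarge"],    # hypothetical instance type
))

cluster.up()          # request the Ray cluster (InstaScale provisions nodes)
cluster.wait_ready()  # block until the cluster is running
print(cluster.details())

# Submit the LoRA fine-tuning script as a distributed (DDP) job.
job_def = DDPJobDefinition(
    name="t5-lora-job",
    script="finetune_t5.py",          # hypothetical training script
    scheduler_args={"requirements": "requirements.txt"},
)
job = job_def.submit(cluster)

print(job.status())
print(job.logs())

# Tear the cluster down once the fine-tuned model has been retrieved.
cluster.down()
```

After the job completes, the notebook would pull the fine-tuned checkpoint from wherever `finetune_t5.py` writes it (e.g., shared storage) before tearing the cluster down.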
Shreyanand changed the title from "How can we fine tune small models with limited resources, and can we use ray?" to "Distributed fine tuning of LLMs" on Jul 13, 2023.