Do model parallelism and pipeline parallelism support parameter-efficient fine-tuning methods such as LoRA?

FlagScale only supports full-parameter fine-tuning for now, but we plan to incorporate LoRA. Could you provide more detailed requirements so that we can take them into consideration for the future implementation?
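For context on what a LoRA integration typically involves: instead of updating the full weight matrix W, LoRA freezes W and trains a low-rank update B @ A, which is what makes it cheap to combine with tensor/pipeline parallelism (only the small A and B matrices need optimizer state). A minimal NumPy sketch of the idea — hypothetical, not FlagScale's actual API or a Megatron-style parallel implementation:

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: frozen base weight W plus trainable
    low-rank update B @ A, scaled by alpha / r."""

    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.01      # trainable down-projection
        self.B = np.zeros((d_out, r))                       # trainable up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus low-rank path; in training, only A and B
        # would receive gradients, W stays frozen.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(16, 8)
x = np.ones((2, 16))
out = layer.forward(x)
# Because B starts at zero, the LoRA path contributes nothing initially,
# so the output matches the frozen base layer exactly.
assert np.allclose(out, x @ layer.W.T)
```

With this structure, merging the adapter back into the base model is just `W + scale * B @ A`, so inference incurs no extra cost once fine-tuning is done.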