Support LoRA or other PEFT #1

Open
2033329616 opened this issue Oct 31, 2023 · 3 comments

@2033329616

Do model parallelism and pipeline parallelism support efficient fine-tuning methods such as LoRA?

@aoyulong
Contributor

aoyulong commented Oct 31, 2023

FlagScale only supports full-parameter fine-tuning for now, but we plan to incorporate LoRA. Could you provide more detailed requirements so that we can take them into account in a future implementation?

@2033329616
Author

2033329616 commented Nov 2, 2023

PEFT includes various fine-tuning methods such as LoRA; it would be better if the project were compatible with PEFT.
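
For reference, the kind of integration being requested typically looks like the sketch below, which wraps an existing model with the Hugging Face `peft` library. The checkpoint name and `target_modules` are placeholders for illustration, not FlagScale internals, and would need to match the actual model definitions.

```python
# Minimal sketch of applying LoRA via the Hugging Face PEFT library.
# The checkpoint name and target_modules are placeholders, not FlagScale internals.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("some-base-model")  # placeholder checkpoint

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    task_type="CAUSAL_LM",
)

# Freeze the base weights and inject trainable LoRA adapters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only a small fraction of parameters remain trainable
```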


github-actions bot commented Jan 1, 2024

Marking as stale. No activity in 60 days.
