Use a different experiment folder's checkpoints as the base model when running an silnlp experiment on clearml #623

Open
laura-burdick-sil opened this issue Jan 8, 2025 · 0 comments
Labels: enhancement (New feature or request), research (Research topics)

Enable multi-stage fine-tuning of NLLB by setting the "model" parameter in the config file to a previously trained model rather than the base Hugging Face model.

Damien (Slack): "Our old OpenNMT implementation supported parent models and multi-stage finetuning. I guess we never implemented it for Huggingface. You can even see that there is a has_parent property on the Config class that isn't used for anything after we removed OpenNMT support. You would configure it by setting the data/parent setting to the experiment name. Huggingface transformers definitely supports fine tuning from a local checkpoint, so we should be able to implement it."
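
For context, a minimal sketch of the Hugging Face side of what Damien describes: `from_pretrained` accepts a local checkpoint directory just as it accepts a Hub model ID, so a parent experiment's checkpoint folder could stand in for the base NLLB model. The model name and checkpoint path below are illustrative only, not silnlp's actual config keys or folder layout.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Current behavior: load the base NLLB model from the Hugging Face Hub.
base_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

# Proposed behavior: load from a previous experiment's checkpoint directory
# (hypothetical path; the real location would come from the experiment config,
# e.g. the data/parent or model setting mentioned above).
parent_checkpoint = "experiments/parent_experiment/run/checkpoint-5000"
model = AutoModelForSeq2SeqLM.from_pretrained(parent_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(parent_checkpoint)

# Training then proceeds as usual (e.g., with Seq2SeqTrainer), starting from
# the parent experiment's weights instead of the base NLLB weights.
```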

laura-burdick-sil added the enhancement (New feature or request) and research (Research topics) labels on Jan 8, 2025
laura-burdick-sil changed the title from "Add ability to use a different experiment folder's checkpoints as the base model when running an silnlp experiment on clearml" to "Use a different experiment folder's checkpoints as the base model when running an silnlp experiment on clearml" on Jan 8, 2025