Enable multi-stage fine-tuning of NLLB by setting the "model" parameter in the config file to a previously trained model rather than the base Hugging Face model.
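As a sketch of the intended usage, a second-stage config might point `model` at an earlier experiment's checkpoint directory. The key layout and paths below are assumptions for illustration, not silnlp's actual schema:

```yaml
# Hypothetical sketch: point "model" at a stage-1 experiment's saved
# checkpoint instead of the base Hugging Face model. Paths are assumed.
model: MT/experiments/stage1_experiment/run/checkpoint-5000

# A first-stage run would instead reference the base model, e.g.:
# model: facebook/nllb-200-distilled-600M
```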
Damien (Slack): "Our old OpenNMT implementation supported parent models and multi-stage fine-tuning. I guess we never implemented it for Huggingface. You can even see that there is a `has_parent` property on the `Config` class that isn't used for anything after we removed OpenNMT support. You would configure it by setting the `data/parent` setting to the experiment name. Huggingface transformers definitely supports fine-tuning from a local checkpoint, so we should be able to implement it."
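On the Hugging Face side, fine-tuning from a local checkpoint is a one-line change, since `from_pretrained` accepts a local directory as well as a hub model ID. A minimal sketch; the checkpoint path is a hypothetical placeholder for wherever silnlp saves the parent experiment's model:

```python
# Minimal sketch of resuming fine-tuning from a local checkpoint with
# Hugging Face transformers. The path is a hypothetical placeholder,
# not silnlp's real experiment layout.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

parent_checkpoint = "experiments/stage1_experiment/run/checkpoint-5000"

# from_pretrained accepts a local directory as well as a hub model ID,
# so loading a parent model is the same call as loading the base model.
tokenizer = AutoTokenizer.from_pretrained(parent_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(parent_checkpoint)

# ...then continue training with Seq2SeqTrainer exactly as in a
# first-stage run against the base NLLB model.
```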
laura-burdick-sil changed the title from "Add ability to use a different experiment folder's checkpoints as the base model when running an silnlp experiment on clearml" to "Use a different experiment folder's checkpoints as the base model when running an silnlp experiment on clearml" on Jan 8, 2025.