
Loss increases during second fine-tuning phase #385

Open
jackatls opened this issue Jan 2, 2025 · 0 comments
jackatls commented Jan 2, 2025

This is the loss curve from my first fine-tuning run.
[screenshot: training-loss curve from the first fine-tuning run]
This is the second fine-tuning run, which resumed from the checkpoint saved at the end of the first run. The loss starts lower than the first run's final value but then gradually increases.
[screenshot: training-loss curve from the second fine-tuning run, with the loss trending upward]
Could it be an issue with the loading process, such as certain optimizer states not being properly restored? I load the saved checkpoint with:

```python
trainer.train(checkpoint=lastCheckpointDir)
```
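
For reference, here is a minimal way to sanity-check whether the checkpoint actually contains optimizer and LR-scheduler state. This is just a sketch assuming a PyTorch-style checkpoint directory with files named `optimizer.pt` and `scheduler.pt`; the actual filenames and layout depend on the trainer, so adjust accordingly:

```python
import os
import torch

# Hypothetical checkpoint directory -- replace with your lastCheckpointDir.
ckpt_dir = "output/checkpoint-last"

for name in ("optimizer.pt", "scheduler.pt"):
    path = os.path.join(ckpt_dir, name)
    if not os.path.exists(path):
        # If these files are missing, resuming restores only model weights:
        # the optimizer (and any LR warmup schedule) restarts from scratch,
        # which can make the loss climb early in the second run.
        print(f"{name}: MISSING")
        continue
    state = torch.load(path, map_location="cpu")
    print(f"{name}: keys = {list(state.keys())}")

# For a PyTorch optimizer state dict, expect 'state' (per-parameter moments,
# e.g. Adam's exp_avg / exp_avg_sq) and 'param_groups' (including 'lr').
```

If the optimizer or scheduler state is absent, or the learning rate resets to a high warmup value on resume, that alone could explain the upward drift in the loss.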

I would greatly appreciate your help!
