Hello, if I want to fine-tune using llama-13B's .pth checkpoint, what changes need to be made to the train.sh script? After fine-tuning with the llama-7B parameters, the accuracy is very low.
In our experiments, I changed --adapter_layer from 32 to 40, and you may also decrease the learning rate.
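A sketch of what the adjusted train.sh might look like for llama-13B. Only the `--adapter_layer 32 → 40` change comes from this thread; every other flag name (`--llama_model_path`, `--blr`, `--batch_size`, `--epochs`), value, and path is an assumption and should be checked against the repository's actual fine-tuning script.

```shell
#!/bin/bash
# Hypothetical train.sh for the llama-13B checkpoint.
# Directory holding the 13B *.pth shards and tokenizer (placeholder path).
TARGET_FOLDER=/path/to/llama-13B

# llama-13B has 40 transformer layers (7B has 32), hence --adapter_layer 40.
# The reply above also suggests lowering the learning rate relative to the
# 7B setting; 9e-3 here is only an example value, not a verified one.
torchrun --nproc_per_node 8 finetuning.py \
    --llama_model_path "$TARGET_FOLDER" \
    --adapter_layer 40 \
    --blr 9e-3 \
    --batch_size 4 \
    --epochs 5
```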
Hello, after I adjusted --adapter_layer to 40 and changed the learning rate to 9e-3, I can only reach 65% on the result you report in that row of the table. I don't know what I did wrong.