Error When Fine Tuning #2

Thank you for creating this. Inference works fine for me; however, when attempting to fine-tune using the Colab version, I get this error:

Comments
Thanks for reporting. I think it's an issue caused by Colab preloading an older version of numpy, which is not reloaded after !pip install numpy==1.23.5. A version check like the following catches it:
import numpy

major, minor = map(float, numpy.__version__.split(".")[:2])
version_float = major + minor / 10**len(str(minor))
print('numpy', version_float)

if version_float < 1.0023:
    raise Exception("Restart the runtime by clicking the 'RESTART RUNTIME' button above (or Runtime > Restart Runtime).")
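
For what it's worth, a more robust way to write this check (just a sketch, not what the notebook currently uses) is to compare parsed version objects with the packaging library, which is typically available in Colab, instead of encoding the version as a float:

# Sketch of an alternative check; the notebook itself uses the float-based
# comparison above, this simply avoids the manual encoding.
import numpy
from packaging import version

if version.parse(numpy.__version__) < version.parse("1.23.5"):
    raise Exception("Restart the runtime by clicking the 'RESTART RUNTIME' button above (or Runtime > Restart Runtime).")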
Thank you, that fixed the issue. As an aside, do you have more information or documentation about the options? There are a lot of different options when fine-tuning, and it would be helpful to have more documentation, examples, and best practices on how to use them.
Yay, thanks for the update! I tried it myself on Colab and reproduced the issue, and the Colab notebook has now been updated to handle this. For the options, I'll add more detailed tooltips or notes about how they work once I have a confident understanding of them myself (I don't want to introduce misrepresentation). In the meantime, I'd recommend consulting the Hugging Face Transformers docs on the training parameters: https://huggingface.co/docs/transformers/v4.28.0/en/main_classes/trainer#transformers.TrainingArguments (still looking for docs about the LoRA-related parameters...)
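
As a rough sketch of how those options usually map onto code (this assumes the notebook forwards its UI options to transformers.TrainingArguments and peft.LoraConfig; the names and values below are illustrative, not recommendations):

# Illustrative only: assumes the fine-tuning options correspond to
# transformers.TrainingArguments and peft.LoraConfig fields.
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="./lora-output",           # where checkpoints are written
    per_device_train_batch_size=4,        # micro batch size per device
    gradient_accumulation_steps=8,        # effective batch size = 4 * 8
    num_train_epochs=3,
    learning_rate=3e-4,
    logging_steps=10,
    save_steps=200,
    fp16=True,                            # mixed precision on GPU
)

lora_config = LoraConfig(
    r=8,                                  # rank of the LoRA update matrices
    lora_alpha=16,                        # scaling applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which projections get adapters
    task_type="CAUSAL_LM",
)

The TrainingArguments fields are the ones documented at the link above; the LoRA-related fields (r, lora_alpha, lora_dropout, target_modules) come from the peft library's LoraConfig, which is usually where those parameters are defined.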