You will need to start from the
@eiz made a script that patches the ggml models by merging the tokenizer into the weights: #324 (comment)
Here are both of the files I posted in #324, for comparison.
Now that PR #252 has been merged, I need to reconvert the model.
Does this mean I need to reconvert from the base model files (consolidated.0.pth), or can I reconvert the quantized models directly?
I'm currently working with the 7B Alpaca LoRA 4-bit and the 13B Alpaca LoRA 4-bit models and would like to keep using them.
I don't have the resources to:
It would be amazing if I could reconvert my quantized Alpaca LoRA models directly.
If not, this would be a breaking change for me (as a digital nomad, I often have slow Wi-Fi and limited resources).
It would be great if someone could clarify.
Thank you
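
For context, a minimal sketch of the standard llama.cpp reconversion workflow as documented in the repo's README around the time of PR #252 (the model paths here are placeholders, not taken from this thread):

```
# Regenerate the f16 ggml file from the original PyTorch checkpoint
# (reads consolidated.0.pth etc. from the model directory)
python3 convert-pth-to-ggml.py models/7B/ 1

# Re-quantize the fresh f16 file to 4 bits (type 2 = q4_0)
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin 2
```

Note that this path starts from the original .pth checkpoint, which is exactly why only having the quantized files on hand is a problem after a format change.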