This repository has been archived by the owner on Sep 27, 2024. It is now read-only.

Gemma models still not working. #21

Open
rombodawg opened this issue Feb 25, 2024 · 1 comment


rombodawg commented Feb 25, 2024

Gemma models that have been quantized with llama.cpp are not working. Please look into the issue.

Error:

"llama.cpp error: 'create_tensor: tensor 'output.weight' not found'"

I will open an issue on the llama.cpp GitHub as well addressing this:

ggerganov/llama.cpp#5706
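For context, the missing `output.weight` is consistent with Gemma using tied embeddings (the output projection shares weights with the token embeddings), so a separate output tensor may simply not exist in the GGUF file. Below is a minimal, hypothetical sketch of the kind of fallback a loader could apply; the tensor names follow common GGUF conventions, and the function itself is illustrative, not llama.cpp's actual code:

```python
# Hypothetical sketch: pick the tensor to use for the output projection.
# Gemma-style models tie embeddings, so 'output.weight' may be absent and
# the token-embedding matrix ('token_embd.weight') is reused instead.

def resolve_output_tensor(tensor_names):
    """Return the name of the tensor to use for the output projection."""
    if "output.weight" in tensor_names:
        return "output.weight"
    if "token_embd.weight" in tensor_names:
        # Tied embeddings: fall back to the input embedding matrix.
        return "token_embd.weight"
    # Mirrors the error reported above when neither tensor exists.
    raise KeyError("create_tensor: tensor 'output.weight' not found")

# A quantized Gemma GGUF typically has embeddings but no output.weight:
gemma_tensors = {"token_embd.weight", "blk.0.attn_q.weight", "output_norm.weight"}
print(resolve_output_tensor(gemma_tensors))  # token_embd.weight
```

A loader with this fallback would open such a file instead of aborting with the `create_tensor` error.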

System:
Ryzen 5600X CPU
RTX 3080 GPU
B550 motherboard
64GB DDR4 RAM
Windows 10 OS


rombodawg commented Feb 26, 2024

I'm uploading the model files for the merges in case anyone wants to do some debugging. They should be up in the next 10 hours or so; sorry, slow internet.

Follow the multi-thread, and check out my model for debugging.

Thread links:
#21
ggerganov/llama.cpp#5706
arcee-ai/mergekit#181
oobabooga/text-generation-webui#5562

https://huggingface.co/rombodawg/Gemme-Merge-Test-7b
