Can madlad400 GGUF models from Hugging Face be used? #8300

Answered by misutoneko
OkLunixBSD asked this question in Q&A

Okay, I got it working now (I think), and man, it feels FAST!!

The bad news is that my GGUF conversion procedure from jbochi's GGUFs to llama.cpp was quite a messy business indeed.
It involved conjuring up an empty GGUF, filling it with metadata, and doing some frankensteining with KerfuffleV2's gguf-tools.
I also wrote a custom script to rename the tensors (a rough sketch of that step is below), and llama.cpp itself needed a teeny weeny change too.
The upside of this method is that the quantized tensors remain untouched.
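For illustration only, here is a minimal sketch of what the tensor-renaming step could look like using the gguf Python package that ships with llama.cpp (gguf-py). This is not the author's actual script: the rename rules, the file names, and the `t5` architecture string are placeholder assumptions, and copying the original metadata (`reader.fields`) is left out entirely. The point it demonstrates is passing the raw data buffers through with their original quantization type, so the quantized blocks are not re-encoded.

```python
# Hypothetical sketch, NOT the author's script: copy a GGUF while renaming
# tensors, leaving the quantized data blocks byte-for-byte untouched.
# Requires the gguf Python package from llama.cpp (pip install gguf).
import re

from gguf import GGUFReader, GGUFWriter

# Placeholder rename rules -- the real jbochi -> llama.cpp mapping has to be
# worked out tensor by tensor.
RENAMES = [
    (re.compile(r"^encoder\.block\.(\d+)\."), r"enc.blk.\1."),
    (re.compile(r"^decoder\.block\.(\d+)\."), r"dec.blk.\1."),
]

def rename(name: str) -> str:
    for pattern, replacement in RENAMES:
        name = pattern.sub(replacement, name)
    return name

reader = GGUFReader("madlad400-src.gguf")                  # placeholder input
writer = GGUFWriter("madlad400-llamacpp.gguf", arch="t5")  # arch string is a guess

# NOTE: the original metadata (reader.fields) would also have to be copied or
# re-created here; that part is omitted for brevity.

for t in reader.tensors:
    # t.data holds the raw (possibly quantized) bytes; passing raw_dtype and
    # the reversed shape writes them back unchanged instead of re-quantizing.
    writer.add_tensor(
        rename(t.name),
        t.data,
        raw_shape=list(reversed(t.shape)),  # the writer reverses dims on output
        raw_dtype=t.tensor_type,
    )

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```

A dry run that just prints the old and new tensor names side by side is a cheap way to sanity-check the mapping before writing anything.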

I can give more details if there's interest, but somehow I feel there must be a better way :D

EDIT: I've now managed to polish the conversion process a little bit, so no customization of llama.cpp itself is necessary any longer.
