
Pre-compiled llama.cpp #263

Open
CMorrison82z opened this issue Jan 26, 2024 · 1 comment

Comments

@CMorrison82z

I'm using an AMD graphics card and had to compile llama.cpp with additional flags and parameters.

From what I can see, llm-chain-llama attempts to compile the llama.cpp module itself, which I imagine is unlikely to produce a build suitable for my hardware.

Is there any way I can use this library with a pre-compiled llama.cpp binary?
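For context, one way a crate like llm-chain-llama *could* support this is a Cargo build script that skips the source build when the user points it at a pre-built library. This is a hypothetical sketch, not something llm-chain-llama implements; the `LLAMA_CPP_LIB` variable name and the `llama` library name are assumptions for illustration.

```rust
// Hypothetical build.rs sketch: link against a pre-built llama.cpp
// instead of compiling it from source. The LLAMA_CPP_LIB env var
// name is an assumption, not something llm-chain-llama supports today.
use std::env;

/// Produce the Cargo directives needed to link a pre-built libllama
/// found in `lib_dir`. Returned as strings so they can be inspected.
fn link_directives(lib_dir: &str) -> Vec<String> {
    vec![
        // Tell rustc where to search for the library...
        format!("cargo:rustc-link-search=native={lib_dir}"),
        // ...and which library to link (e.g. libllama.so on Linux).
        "cargo:rustc-link-lib=dylib=llama".to_string(),
    ]
}

fn main() {
    // If the user points us at a pre-built library, emit link
    // directives and skip the source build entirely.
    if let Ok(dir) = env::var("LLAMA_CPP_LIB") {
        for d in link_directives(&dir) {
            println!("{d}");
        }
    }
    // Otherwise: fall back to compiling llama.cpp from source
    // (the crate's existing behavior, elided here).
}
```

With something like this, `LLAMA_CPP_LIB=/path/to/libs cargo build` would reuse a user-supplied binary built with whatever AMD-specific flags were needed.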

@williamhogman
Contributor

If I recall correctly, llama.cpp doesn't provide ready-made dynamic libraries to link against, so in this case it is hard for us to offer anything along those lines.
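That said, llama.cpp's CMake build can be asked to produce a shared library via the standard `BUILD_SHARED_LIBS` switch, which would give a downstream build script something to link. A rough sketch, assuming an early-2024 llama.cpp checkout; the ROCm flag name is from memory and may have been renamed in later releases, so check the repo's build docs:

```shell
# Build llama.cpp as a shared library with ROCm/HIP support.
# LLAMA_HIPBLAS is the flag name as of early 2024 and may differ
# in newer releases; BUILD_SHARED_LIBS is standard CMake.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_HIPBLAS=ON
cmake --build build --config Release
# The resulting libllama.so lands under build/ and could, in
# principle, be linked by a downstream crate's build script.
```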

Any ideas?
