Using Llama.cpp as a dependency in another c++ project. #7631
Replies: 3 comments 3 replies
-
You will need to compile llama.cpp with …
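For reference, a shared build of llama.cpp can typically be produced with CMake's standard BUILD_SHARED_LIBS option. The exact flags can differ between llama.cpp versions, so treat this as a sketch rather than the definitive recipe:

```shell
# Sketch: build llama.cpp as shared libraries (flags may vary by version)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release
```

With BUILD_SHARED_LIBS=ON, CMake builds .dll/.so files instead of static archives, which you then ship alongside your executable.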
-
You're right, I meant a shared build. I haven't been able to get the static build to work; the llama.cpp build currently produces a mix of static and shared libraries, and a static build requires all of the library files to be built using …
A static build packages all of the library code into a single exe file, making it much larger, but you don't have to ship the library files along with the exe. A shared build keeps the exe much smaller, but you have to ship the library files alongside it.
As for the C++ version, I'm using C++23 and it links fine against library files compiled with C++11.
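As an illustration of consuming llama.cpp from another CMake project, something along these lines can work. The `llama` and `common` target names are assumptions based on one version of the llama.cpp build scripts and may differ in yours:

```cmake
cmake_minimum_required(VERSION 3.14)
project(my_game CXX)

# Assumes llama.cpp is vendored as a subdirectory of this project.
# add_subdirectory exposes the 'llama' target (and 'common' for the
# helpers declared in common.h) -- target names may vary by version.
add_subdirectory(llama.cpp)

add_executable(my_game src/main.cpp)
target_link_libraries(my_game PRIVATE llama)
target_compile_features(my_game PRIVATE cxx_std_23)
```

Linking the exported target rather than listing individual .cpp files lets llama.cpp's own build system decide which sources and backends to compile.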
-
That's right, use …
I have a script extender plugin for SFSE; take a look if you want, as it would be similar for Skyrim/FO4: https://github.com/dranger003/sfse_plugin_console_api/blob/master/inc/plugin.h
-
I am currently working on a project where I would like to integrate llama.cpp into a game as a dependency. I have a background in Python, but I am still pretty new to C++.
As far as I understand, in order to use llama.cpp as a dependency, I need to specify which headers and which source (.cpp) files to include in my xmake.lua file (or CMakeLists.txt).
Based on main.cpp and simple.cpp, I only need to include "common.h" and "llama.h". However, there are many more folders in the project whose headers are not mentioned in these examples. If I wish to use llama.cpp as a dependency, are these two headers enough for my project? And if I wish to ship a .dll file, how do I handle the CUDA files if I did not mention the ggml-cuda.h header in my xmake.lua/CMakeLists.txt file?
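For context, a minimal program using only these headers might look like the sketch below. The function names reflect one version of the llama.cpp API and are not guaranteed to match every checkout, since the API changes between releases:

```cpp
// Sketch: only llama.h is needed for the core API; common.h adds
// helper utilities used by the examples. Calls are illustrative.
#include "llama.h"

int main() {
    llama_backend_init();   // initialize ggml backends once per process

    llama_model_params params = llama_model_default_params();
    // params.n_gpu_layers controls GPU offload when llama.cpp was built
    // with CUDA support -- no ggml-cuda.h include is needed here.
    // llama_model * model = llama_load_model_from_file("model.gguf", params);

    llama_backend_free();   // release backend resources
    return 0;
}
```

The CUDA sources are an internal backend of the library: they are compiled into the llama/ggml binaries at build time, so the consuming project never includes ggml-cuda.h directly.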
Thanks