Hello,

I am building a RAG pipeline with llama-cpp-python and LangChain's LlamaCpp wrapper to query a few hundred PDFs of scientific literature, and I have a few GPUs available. I have tried to optimize the LLM's parameters to the best of my knowledge based on information I found online. Would these parameters seem appropriate for interrogating a large set of documents?

I load the model with these parameters:

Parameters loaded:
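The original parameter listing did not survive in this copy of the thread. As a point of reference, here is a minimal sketch of the kind of LlamaCpp configuration typically used for multi-GPU RAG over a large corpus; the model path and every value shown are illustrative assumptions, not the poster's actual settings:

```python
# Hypothetical parameter set for langchain_community.llms.LlamaCpp,
# aimed at RAG over a large PDF corpus with GPU offloading.
llama_params = {
    "model_path": "models/model.Q4_K_M.gguf",  # hypothetical local GGUF file
    "n_gpu_layers": -1,    # offload all layers to the GPU(s)
    "n_ctx": 4096,         # context window; must fit prompt + retrieved chunks
    "n_batch": 512,        # prompt-processing batch size
    "temperature": 0.1,    # low temperature for factual, grounded answers
    "max_tokens": 512,     # cap on generated tokens per answer
    "f16_kv": True,        # half-precision KV cache to save VRAM
    "verbose": False,
}

# Requires llama-cpp-python built with GPU support and a local GGUF model:
# from langchain_community.llms import LlamaCpp
# llm = LlamaCpp(**llama_params)
```

For RAG, the parameters that matter most are usually `n_ctx` (large enough to hold the retrieved context plus the question), `n_gpu_layers` (to keep inference on the GPUs), and a low `temperature` so answers stay anchored to the retrieved passages.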