Feedback about local LLM usage with paper-qa (share your experience about different LLMs and their parametrization). #753
Snikch63200
started this conversation in General
Replies: 1 comment
-
As an example:
-
After many trials with different local LLMs and different parameters (also called 'options'), I decided to open this discussion so we can share experiences on the subject.
My aim is to compare the performance of different local models under different configurations.
This is not a discussion about configuration issues but about configuration optimization. It is not, stricto sensu, a discussion about paper-qa settings.
For each trial, please specify:
- `answer_max_sources`
- `evidence_k`
- `max_concurrent_requests`

as these have a major impact on speed and answer relevance (a configuration sketch is shown below). New ideas for standardizing the tests are welcome.
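To make reported configurations easy to reproduce, here is a minimal sketch of how these three settings can be passed to paper-qa with a local model served through Ollama. The model name, `api_base`, and the parameter values are illustrative assumptions, not recommendations, and nested field names may differ between paper-qa versions, so check the `Settings` model of your installed release.

```python
# Minimal sketch: paper-qa with a local Ollama-served model.
# Assumes paper-qa v5's Settings API and an Ollama server on the
# default port; model name and values below are placeholders.
from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.1",
            "litellm_params": {
                "model": "ollama/llama3.1",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

answer = ask(
    "What is the main finding of these papers?",
    settings=Settings(
        llm="ollama/llama3.1",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.1",
        summary_llm_config=local_llm_config,
        # The three settings under comparison in this thread:
        answer={
            "answer_max_sources": 5,       # max sources cited in the final answer
            "evidence_k": 10,              # evidence chunks retrieved per question
            "max_concurrent_requests": 4,  # parallel LLM calls; lower for slow local backends
        },
    ),
)
print(answer)  # response object; exact attributes vary across paper-qa versions
```

Roughly speaking, `max_concurrent_requests` tends to matter most for local backends that serialize requests, while `evidence_k` and `answer_max_sources` trade answer breadth against the total number of LLM calls.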