# Iterating over LLM models does not work in LangChain #28

**@yogeshhk** (original issue):

Can LLMChain objects be stored in a dictionary and iterated over? The first LLM model runs well, but the second iteration fails with an error. Am I missing something in the dictionary declarations?

More details at https://stackoverflow.com/questions/76110329/iterating-over-llm-models-does-not-work-in-langchain
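Since the failing snippet did not survive the page capture, here is a minimal sketch of the pattern the issue describes, assuming one chain per model; the model names and the prompt are hypothetical stand-ins. Building a fresh LLMChain for each model and storing the chains in a dictionary iterates cleanly:

```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI

# Hypothetical model names; substitute the models being compared
model_names = ["text-davinci-003", "text-curie-001"]

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer briefly: {question}",
)

# One LLMChain per model, keyed by model name
chains = {
    name: LLMChain(prompt=prompt, llm=OpenAI(model_name=name, temperature=0))
    for name in model_names
}

# Iterate over the stored chains; each run is independent
for name, chain in chains.items():
    print(name, "->", chain.run(question="What is LangChain?"))
```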
## Comments

**@TechnoRahmon:**

I have a similar situation. Here is where I store my LLM instance:

```python
# Create an instance of the OpenAI LLM with the desired configuration
llm_davinci = OpenAI(
    model_name=models_names["completions-davinci"],
    temperature=0,
    max_tokens=256,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    n=1,
    best_of=1,
    request_timeout=None,
)
```

Then I use it in:

```python
def ask_llm(query: str, filename: str):
    # prepare the prompt
    prompt = code_assistance.format(context="this is a test", command=query)
    tokens = tiktoken_len(prompt)
    print(f"prompt : {prompt}")
    print(f"prompt tokens : {tokens}")
    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
    # run the LLM
    with get_openai_callback() as cb:
        response = llm_chain.run()
    return jsonify({'query': query,
                    'response': str(response),
                    'usage': cb})
```

The issue is with this line, which raises an error:

```python
# connect to the LLM
llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
```

Any idea how to solve this?
**@yogeshhk:**

@TechnoRahmon in my case the confusion was with the "prompt" variable... try changing "prompt" inside ask_llm() to something else, like "llm_prompt".
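As an aside, the variable name alone is unlikely to be the cause: in the LangChain versions current at the time of this thread, LLMChain is a pydantic model whose prompt field must be a BasePromptTemplate, so passing a pre-formatted string fails validation regardless of what the variable is called. A minimal sketch of that failure (the exact error wording varies by version):

```python
from langchain import LLMChain
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")

# Passing a plain, pre-formatted string where a PromptTemplate is expected
try:
    llm_chain = LLMChain(prompt="this is a test", llm=llm)
except Exception as e:  # pydantic raises a validation error here
    print(type(e).__name__, ":", e)
```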
**@TechnoRahmon:**

@yogeshhk Thank you for replying. It has actually been solved by feeding the prompt to the LLMChain as a PromptTemplate:

```python
# prepare the prompt
prompt = PromptTemplate(
    input_variables=give_assistance_input_variables,
    template=give_assistance_prompt
)

# connect to the LLM
llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
```
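To tie the fix back to the original ask_llm(), here is a minimal end-to-end sketch, with a hypothetical template and input-variable list standing in for give_assistance_prompt and give_assistance_input_variables, and a plain dict returned instead of Flask's jsonify to keep it self-contained. Note that with a PromptTemplate the input values move from .format() into run():

```python
from langchain import LLMChain, PromptTemplate
from langchain.callbacks import get_openai_callback

# Hypothetical stand-ins for give_assistance_prompt / give_assistance_input_variables
give_assistance_prompt = "Context: {context}\nCommand: {command}\nAnswer:"
give_assistance_input_variables = ["context", "command"]

def ask_llm(query: str, filename: str):
    # prepare the prompt as a template, not a pre-formatted string
    prompt = PromptTemplate(
        input_variables=give_assistance_input_variables,
        template=give_assistance_prompt,
    )
    # connect to the LLM (llm_davinci as defined earlier in the thread)
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
    # run the LLM, supplying the template's input variables to run()
    with get_openai_callback() as cb:
        response = llm_chain.run(context="this is a test", command=query)
    return {'query': query, 'response': str(response), 'usage': str(cb)}
```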