How to pass multiple parameters to a prompt #16421
-
Would this be helpful in debugging?

```python
from operator import itemgetter

from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnableParallel

PROMPT = """This is a fake prompt...
Context:
{context}
Query: {query}
Result:"""


def print_me(inputs):
    print('Printing!!')
    print(inputs)
    return inputs


def fake_retriever(inputs):
    query = inputs['query']
    return [Document(page_content=query[::-1]), Document(page_content='goodbye')]


prompt = ChatPromptTemplate.from_template(PROMPT)

chain = RunnableLambda(print_me) | RunnableParallel({
    'context': RunnableLambda(fake_retriever),
    'query': itemgetter('query')
}) | RunnableLambda(print_me) | prompt

result = chain.invoke({'query': 'hello'})
```
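For intuition: `RunnableParallel` passes the *same* input to every branch in the mapping and collects each result under its key. A minimal plain-Python sketch of that behavior (a toy model, not LangChain's actual implementation; plain strings stand in for `Document` objects):

```python
from operator import itemgetter

# Toy model of RunnableParallel: every value in the mapping receives the
# SAME input, and the outputs are collected under the corresponding keys.
def run_parallel(mapping, inputs):
    return {key: fn(inputs) for key, fn in mapping.items()}

def fake_retriever(inputs):
    # Receives the whole input dict, so it must unpack 'query' itself
    return [inputs["query"][::-1], "goodbye"]

out = run_parallel(
    {"context": fake_retriever, "query": itemgetter("query")},
    {"query": "hello"},
)
print(out)  # {'context': ['olleh', 'goodbye'], 'query': 'hello'}
```

This is why `fake_retriever` above works: it is written to accept the full dict, whereas a real retriever is not.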
-
@eyurtsev not really, it doesn't print anything and I'm getting the same error. There seem to be a lot of different docs around passing variables to prompts; what would be the latest / most up-to-date way of doing it?

EDIT: just to clarify, this does not work when I try to adapt it to my code. Running the snippet above directly works and just displays the following:

However, I'm not sure how I'm supposed to tailor that to my specific implementation.
-
After debugging this a bit more, it seems that this could be coming from my retriever? The error is raised with the following in my chain:

```python
| RunnableParallel({"var_a": itemgetter("var_a"), "var_b": itemgetter("var_b"), "context": retriever, "query": itemgetter("query")})
```

However, using the fake retriever defined above it seems to work fine:

```python
def fake_retriever(inputs):
    query = inputs['query']
    return [Document(page_content=query[::-1]), Document(page_content='goodbye')]

...

| RunnableParallel({"var_a": itemgetter("var_a"), "var_b": itemgetter("var_b"), "context": RunnableLambda(fake_retriever), "query": itemgetter("query")})
```

What confuses me is that my old approach (without using variables) was working fine with that retriever:

```python
{"context": retriever, "question": RunnablePassthrough()}
| prompt
...
chain.invoke("This is my query")
```

On top of that, adding the debug function shows the retriever working correctly when I use this simple chain:

```python
{"context": retriever, "question": RunnablePassthrough()}
| RunnableLambda(print_me) | prompt
...
chain.invoke("This is my query")
# This shows context in the print debug logs
```

So somehow adding simple string variables to this prompt breaks it.
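A plain-Python sketch of what is likely going wrong (the `retriever_invoke` function here is a hypothetical stand-in for a real retriever, whose `invoke` expects a plain string query): in the old chain the input *was* a string, so the retriever received it directly, but once the input becomes a dict of variables, the whole dict flows into the `context` branch and the retriever gets a dict instead of a string.

```python
from operator import itemgetter

# Hypothetical stand-in for a real retriever: invoke() wants a string query.
def retriever_invoke(query):
    if not isinstance(query, str):
        raise TypeError(f"retriever expected a string query, got {type(query).__name__}")
    return [f"doc-for:{query}"]

inputs = {"var_a": "foo", "var_b": "bar", "query": "hello"}

# The whole input dict flows into the context branch, so the retriever fails:
try:
    retriever_invoke(inputs)
except TypeError as err:
    print(err)  # retriever expected a string query, got dict

# The fix: extract the string first (i.e. itemgetter("query") | retriever)
docs = retriever_invoke(itemgetter("query")(inputs))
print(docs)  # ['doc-for:hello']
```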
-
I managed to make it work like this (I've also added the example condition, which is similar in my use case):

```python
PROMPT = """This is a fake prompt...
Variable_a: {var_a}
Variable_b: {var_b}
Context:
{context}
Query: {query}
Result:"""

vectorstore = FAISS.load_local(data_path, embeddings=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# prompt = PromptTemplate(input_variables=["var_a", "var_b", "context", "query"], template=PROMPT)
prompt = ChatPromptTemplate.from_template(PROMPT)
model = ChatOpenAI()

var_a = ""
var_b = ""
myConditionCheck = True
if myConditionCheck:
    var_a = "foo"
    var_b = "bar"


def get_vara(_):
    return var_a


def get_varb(_):
    return var_b


chain = (
    {"context": retriever, "query": RunnablePassthrough(), "var_a": RunnableLambda(get_vara), "var_b": RunnableLambda(get_varb)}
    | prompt
    | model
    | StrOutputParser()
)

query = "this is a fake query"
response = chain.invoke(query)  # (the original said rag_chain.invoke, but the chain is named chain)
```

This will do for now, but it doesn't seem very straightforward. I wonder if there is a better way to achieve this?
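For what it's worth, the `get_vara` / `get_varb` trick works because each helper is called with the chain input, which it simply ignores (the `_` argument), returning the value captured from the enclosing scope instead. A minimal standalone illustration of that closure pattern:

```python
var_a = "foo"

def get_vara(_):
    # Ignores whatever flows through the chain; returns the captured value.
    return var_a

# Whatever the chain input is, the helper yields the same captured value:
print(get_vara("this is a fake query"))  # foo
print(get_vara({"anything": "else"}))    # foo
```

The trade-off is that the values are baked in at definition time rather than supplied per call, which is why passing them in the `invoke` dict (as in the answer below this one) scales better.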
-
The retriever `.invoke` interface expects a string query rather than a dict: `retriever.invoke(query)`.

I've included a working LCEL variant of the original code together with non-LCEL variants to help show other ways of writing / debugging the code. I like to sprinkle `print_me` steps into the chain to inspect intermediate values. Also keep in mind that LCEL is an optimization step: if you don't need the optimization, you can write the code in an imperative style (as shown below).

```python
from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

PROMPT = """This is a fake prompt...
Variable_a: {var_a}
Variable_b: {var_b}
Context:
{context}
Query: {query}
Result:"""

prompt = ChatPromptTemplate.from_template(PROMPT)
model = ChatOpenAI()


def print_me(inputs):
    print(f'Inputs: {inputs}')
    return inputs


print_me = RunnableLambda(print_me)

vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
```

**Using LCEL (Declarative)**

```python
chain = (
    {"var_a": itemgetter("var_a"), "var_b": itemgetter("var_b"), "context": (print_me | itemgetter('query') | print_me | retriever), "query": itemgetter("query")}
    | prompt
    | model
    | StrOutputParser()
)

query = "where did harrison work"
response = chain.invoke({"var_a": "foo", "var_b": "bar", "query": query})
```

**Imperative style (no LCEL)**

This uses an imperative style without any LCEL. It loses some benefits of LCEL, but it's easier to get started with.

```python
def invoke_rag(inputs):
    query = inputs['query']  # <-- Must unpack the dictionary to extract the query
    documents = retriever.invoke(query)
    model_input = prompt.invoke({'query': query, 'var_a': inputs['var_a'], 'var_b': inputs['var_b'], 'context': documents})
    model_output = model.invoke(model_input)
    parser = StrOutputParser()
    parsed_output = parser.invoke(model_output)
    return parsed_output


invoke_rag({"var_a": "foo", "var_b": "bar", "query": query})
```

**Bridging Imperative and LCEL**

Here's a version of the imperative style that bridges between the non-LCEL and the LCEL versions. Once things are written with LCEL, there's no need to name the intermediate variables or explicitly call `.invoke` on each step.

```python
def invoke_rag(inputs):
    retriever_output = {
        'context': retriever.invoke(inputs['query']),
        'query': inputs['query'],
        'var_a': inputs['var_a'],
        'var_b': inputs['var_b'],
    }
    print(f"Output from retriever: {retriever_output}")
    model_input = prompt.invoke(retriever_output)
    print(f"Output from prompt: {model_input}")
    model_output = model.invoke(model_input)
    print(f"Output from model: {model_output}")
    parser = StrOutputParser()
    parsed_output = parser.invoke(model_output)
    print(f"Output from parser: {parsed_output}")
    return parsed_output


invoke_rag({"var_a": "foo", "var_b": "bar", "query": query})
```
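To make the bridge concrete, here's a toy model of the pipe composition itself (plain Python, not LangChain's real classes): `|` just chains functions left to right, so the declarative chain and the hand-written imperative calls compute the same thing.

```python
# Toy sketch of LCEL-style composition: a Step wraps a function and
# __or__ composes two steps into one that runs them in sequence.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# Hypothetical stand-ins for retriever and prompt formatting:
retrieve = Step(lambda d: {**d, "context": f"docs-for:{d['query']}"})
format_prompt = Step(lambda d: f"{d['var_a']}/{d['var_b']}/{d['context']}")

# Declarative: compose once, invoke once.
chain = retrieve | format_prompt
declarative = chain.invoke({"var_a": "foo", "var_b": "bar", "query": "q"})

# Imperative: call each step's invoke() by hand.
step1 = retrieve.invoke({"var_a": "foo", "var_b": "bar", "query": "q"})
imperative = format_prompt.invoke(step1)

assert declarative == imperative
print(declarative)  # foo/bar/docs-for:q
```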
-
@eyurtsev thanks for the update. Overall I prefer the LCEL approach, as I find it clearer and more composable, but the provided example just doesn't work in my case. I've tested the imperative style and it works properly. Do you see any issue with my working example of LCEL + `RunnableLambda` usage? Even though the imperative style works, I prefer the declarative style of LCEL for the chain declaration.
-
I've got it working. Based on my originally submitted code, you need to use this:

Instead of:

This allows me to use it as needed. I guess this is documented properly, so excuse my confusion. The confusion mostly comes from the fact that I was using the following form before trying to inject the extra variables:
-
Forgive me for asking what may be a dumb question, I'm a newbie to LC… But why are we doing variable replacement in such a verbose way instead of something more straightforward? Is it for the purposes of using LCEL?
-
I've been trying multiple times to pass parameters to my prompt in a chain using LCEL, but I'm always facing some form of type issue at some point.
I've tried to follow the first example provided in https://python.langchain.com/docs/expression_language/cookbook/retrieval but either it is out of date or it's just not working.
Here's my code:
I'm getting the following error: