```python
from typing import TypedDict, List, Union, Dict

from dotenv import load_dotenv
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

load_dotenv()


def _combine_documents(docs: List[str], document_separator: str = "\n\n"):
    formatted = [
        f"Source ID: {i}\n Source Content: {doc}" for i, doc in enumerate(docs)
    ]
    return document_separator.join(formatted)


system_template = """You are a useful assistant. Reply to the question using the context provided.

# CONTEXT:

{context}"""


class CitedAnswer(BaseModel):
    """Reply to the question citing the sources provided. Be detailed and organized in your responses."""

    answer: str = Field(
        ...,
        description="The answer to the question",
    )
    citations: List[int] = Field(
        ...,
        description="The ids of the sources that justify the answer.",
    )


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_template),
        ("human", "{query}"),
    ]
)

model = ChatOpenAI(
    name="answerer_llm",
    model="gpt-3.5-turbo",
    temperature=0,
    streaming=True,
)
model = model.bind_tools(
    [CitedAnswer],
    tool_choice="CitedAnswer",
)

output_parser = JsonOutputToolsParser(diff=False).with_config(
    {"run_name": "answerer_parser"}
)

chain = prompt | model | output_parser


# GRAPH
class GraphState(TypedDict):
    query: str
    sources: List[str]
    response: Dict


async def generate(state: GraphState, config) -> Dict[str, Union[str, List[int]]]:
    """
    Generates a response.

    Args:
        state (messages): The current state of the agent.

    Returns:
        dict: The output key is filled.
    """
    output = await chain.ainvoke(
        {
            "query": state["query"],
            "context": _combine_documents(state["sources"]),
        },
        config=config,
    )
    return {"response": output}


## Graph
graph = StateGraph(GraphState)
graph.add_node("generate", generate)
graph.set_entry_point("generate")
graph.add_edge("generate", END)
compiled_graph = graph.compile()
```
```python
async for event in chain.astream_events(
    {"query": "my question", "context": _combine_documents(my_sources)},
    version="v1",
):
    print(event["event"])
```
The parser streams correctly. In particular, I see a pattern of events like:

```
on_chain_start
on_prompt_start
on_prompt_end
on_chat_model_start
on_chat_model_stream
on_parser_start
on_parser_stream
on_chain_stream
on_chat_model_stream
on_parser_stream
on_chain_stream
on_chat_model_stream
on_chat_model_stream
on_parser_stream
on_chain_stream
on_chat_model_stream
on_parser_stream
on_chain_stream
```
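In this direct-chain case, the final tool-call arguments can be recovered from the parser chunks. A hypothetical helper, assuming (based on the observed trace, not verified against the parser source) that `JsonOutputToolsParser(diff=False)` emits the cumulative parse-so-far in each `on_parser_stream` chunk:

```python
def final_tool_args(parser_chunks, tool_name="CitedAnswer"):
    """Hypothetical helper: given the chunks observed in on_parser_stream
    events (each assumed to be the cumulative list of parsed tool calls),
    return the args of the last, most complete call for tool_name."""
    last = None
    for chunk in parser_chunks:
        last = chunk
    for call in last or []:
        if call.get("type") == tool_name:
            return call.get("args")
    return None


# Synthetic chunks mimicking a progressively parsed CitedAnswer call.
chunks = [
    [{"type": "CitedAnswer", "args": {}}],
    [{"type": "CitedAnswer", "args": {"answer": "Par"}}],
    [{"type": "CitedAnswer", "args": {"answer": "Paris", "citations": [0]}}],
]
print(final_tool_args(chunks))  # → {'answer': 'Paris', 'citations': [0]}
```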
```python
async for event in compiled_graph.astream_events(
    {"query": "my question", "sources": my_sources},
    version="v1",
):
    print(event["event"])
```
I expect the same events to be streamed. Instead, the parser does not stream, and I see a pattern of events like:

```
on_chain_start
on_chain_start
on_chain_end
on_chain_start
on_chain_start
on_chain_start
on_prompt_start
on_prompt_end
on_chat_model_start
on_chat_model_stream
on_chat_model_stream
on_chat_model_stream
on_chat_model_stream
on_chat_model_stream
...
on_chat_model_stream
on_chat_model_end
on_parser_start
on_parser_end
on_chain_end
on_chain_stream
on_chain_end
on_chain_start
on_chain_end
on_chain_stream
on_chain_end
on_chain_stream
on_chain_end
```
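Since the trace above shows that `on_chat_model_stream` events still fire inside the graph, a possible workaround is to drive token-level streaming off the chat-model events rather than the parser events. A sketch, assuming the v1 event shape where `event["data"]["chunk"]` carries the token chunk:

```python
def model_stream_text(events):
    """Sketch: collect token text from on_chat_model_stream events of an
    astream_events trace. Assumes each event dict has 'event' and 'data'
    keys, with the token chunk's text under data['chunk'].content."""
    tokens = []
    for event in events:
        if event["event"] == "on_chat_model_stream":
            chunk = event["data"]["chunk"]
            # AIMessageChunk exposes .content; fall back to str() otherwise.
            tokens.append(getattr(chunk, "content", str(chunk)))
    return "".join(tokens)


# Synthetic trace using a stand-in object in place of AIMessageChunk.
class FakeChunk:
    def __init__(self, content):
        self.content = content


trace = [
    {"event": "on_chain_start", "data": {}},
    {"event": "on_chat_model_stream", "data": {"chunk": FakeChunk("Hel")}},
    {"event": "on_chat_model_stream", "data": {"chunk": FakeChunk("lo")}},
    {"event": "on_chain_end", "data": {}},
]
print(model_stream_text(trace))  # → Hello
```

Note that with tool calls the streamed `content` may be empty and the partial arguments may arrive elsewhere on the chunk, so this only sidesteps the missing `on_parser_stream` events; it does not explain why the parser stops streaming inside the graph.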
langchain==0.1.14
langchain-cli==0.0.21
langchain-community==0.0.31
langchain-core==0.1.41
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langgraph==0.0.34

macOS 14.2.1
Python 3.9.16
Any news here? The issue still seems to be present.