Hello! 👋 I'm quite new to the LangChain tools, but I have been impressed with how easily I can set up models with endpoints using LangServe.

**Summary of Problem**

I'm having difficulties deploying an `OpenAIAssistantRunnable` as a REST API with LangServe.

**Detailed Problem**

From reading the docs, LangServe can deploy LangChain chains and runnables as a REST API. I've had success using it to deploy chat models with the code below:

```python
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI(
    title="Title",
    description="a description",
    version="0.1.0",
    summary="This is a summary of this app",
)

model = ChatOpenAI(openai_api_key="<OPENAI_API_KEY>")

add_routes(
    app,
    model,
    path="/openai",
)
```

However, I'm running into issues when trying to experiment with OpenAI's Assistants API using LangChain/LangServe. Specifically, `add_routes` raises an error when given an `OpenAIAssistantRunnable`. Here's the code with the problem; my data-scrubbed call stack follows:

```python
from fastapi import FastAPI
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langserve import add_routes

app = FastAPI(
    title="Title",
    description="a description",
    version="0.1.0",
    summary="This is a summary of this app",
)

# Doesn't matter if `as_agent` is True or not - still get the same call stack error
agent = OpenAIAssistantRunnable(assistant_id="<ASSISTANT_ID>", as_agent=True)

add_routes(
    app,
    agent,
    path="/assistant",
)
```

**Call Stack**

(stack trace omitted)
---
Looks like a bug at first sight. I'll investigate and, if it is a bug, will create an issue. I suspect the problem is on the langchain side.
---
The issue is probably due to the libraries using a mixture of pydantic versions. It's easy to trigger errors like this:

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

agent = OpenAIAssistantRunnable(assistant_id="<ASSISTANT_ID>", as_agent=True)
print(agent.schema())
```

Specifying the input and output types is one way to resolve it:

```python
from typing import List, Union

from langchain.agents.openai_assistant import OpenAIAssistantRunnable
# The action/finish types live in the module's `base`
from langchain.agents.openai_assistant.base import (
    OpenAIAssistantAction,
    OpenAIAssistantFinish,
)

agent = OpenAIAssistantRunnable(assistant_id="<ASSISTANT_ID>", as_agent=True).with_types(
    output_type=Union[
        List[OpenAIAssistantAction],
        OpenAIAssistantFinish,
    ]
)

add_routes(
    app,
    agent,
    path="/assistant",
)
```

There might still be serialization issues, since the types are not registered as "well known types" (https://github.com/langchain-ai/langserve/blob/main/langserve/serialization.py#L54). So the `RemoteRunnable` client may not be able to de-serialize the types and will instead keep them as dicts (which might be fine for you).

Serialization will likely not be relevant if the runnable is used with an `AgentExecutor` (but it looks like there is some other issue with the executor).
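For context, a minimal sketch of what the client side could then look like — the URL and input here are assumptions (a local server on port 8000, serving the `/assistant` route from above):

```python
from langserve import RemoteRunnable

# Hypothetical client for the /assistant route above (URL is an assumption).
assistant = RemoteRunnable("http://localhost:8000/assistant")

# `OpenAIAssistantRunnable` takes a dict with a "content" key as input.
result = assistant.invoke({"content": "What's 2 + 2?"})

# Since the output types aren't registered as well-known types, the
# actions/finish may come back as plain dicts rather than rich objects.
print(type(result), result)
```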
---
Here's an example with an agent executor:

```python
from fastapi import FastAPI
from langchain.agents import AgentExecutor
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.tools import tool
from typing_extensions import TypedDict

from langserve import add_routes

app = FastAPI(
    title="Title",
    description="a description",
    version="0.1.0",
    summary="This is a summary of this app",
)


@tool
def favorite_animal(name: str) -> str:
    """Get the favorite animal of the person with the given name"""
    if name.lower().strip() == "eugene":
        return "cat"
    return "dog"


tools = [favorite_animal]

runnable = OpenAIAssistantRunnable.create_assistant(
    "scary-cat",
    "meow in a scary way",
    tools=tools,
    as_agent=True,
    model="gpt-4-1106-preview",
)
print(runnable.assistant_id)

agent_executor = AgentExecutor(agent=runnable, tools=tools)


class Input(TypedDict):
    content: str


add_routes(
    app, agent_executor.with_types(input_type=Input, output_type=str), path="/agent"
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app)
```

The example above works. Playground output is still ugly for agents. Feel free to modify it; you may want to propagate the thread id back to the client if you want repeated interaction.
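On propagating the thread id: `OpenAIAssistantRunnable` accepts a `thread_id` key in its input dict, so a follow-up call can continue an existing conversation. Here's a rough sketch of what that might look like from a client, assuming the server returns the thread id in its output and the `Input` type above is extended with an optional `thread_id` field (both are assumptions, not something the example above already does):

```python
from langserve import RemoteRunnable

agent = RemoteRunnable("http://localhost:8000/agent")  # URL is an assumption

# First turn: the server creates a new thread behind the scenes.
first = agent.invoke({"content": "What is Eugene's favorite animal?"})

# Follow-up turn: reuse the thread by passing its id back in the input.
# "<THREAD_ID_FROM_FIRST_RESPONSE>" is a placeholder for whatever the
# server propagated back from the first call.
followup = agent.invoke({
    "content": "Now meow even more scarily.",
    "thread_id": "<THREAD_ID_FROM_FIRST_RESPONSE>",
})
```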