
Tool call not working for Sonnet-3.5 #28790

Open
4 tasks done
HasnainKhanNiazi opened this issue Dec 18, 2024 · 5 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@HasnainKhanNiazi

HasnainKhanNiazi commented Dec 18, 2024

Checked other resources

  • This is a bug, not a usage question. For questions, please use GitHub Discussions.
  • I added a clear and detailed title that summarizes the issue.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.

Example Code

import json
import os
from typing import Annotated, Sequence, TypedDict

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage, ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from psycopg_pool import AsyncConnectionPool
from pydantic import BaseModel, Field

# Variant 1: plain @tool decorator, schema inferred from the signature/docstring.
@tool
def find_categories(user_query: str):
    """ 
    find_categories tool: Perform a search query to retrieve the top N categories based on the user query.
    Params: user_query: A string containing the user query.
    Returns: list: A list of retrieved categories and their attributes.
    """
    found_categories = find_relevant_categories(user_query)
    return found_categories

# Variant 2: explicit args_schema via a pydantic model.
class find_categories_Input(BaseModel):
    user_query: str = Field(description="User search query to find the categories")

@tool("find_categories", args_schema=find_categories_Input, return_direct=False)
def find_categories(user_query: str):
    """ 
    find_categories tool: Perform a search query to retrieve the top N categories based on the user query.
    Params: user_query: A string containing the user query.
    Returns: list: A list of retrieved categories and their attributes.
    """
    found_categories = find_relevant_categories(user_query)

    return found_categories

model = ChatAnthropic(model='claude-3-5-sonnet-20240620', temperature=0.6, max_tokens=4096)
class AgentState(TypedDict):
    """The state of the agent."""

    messages: Annotated[Sequence[BaseMessage], add_messages]

# NOTE: TOOLS is not defined in the snippet as posted; presumably a list of the
# tools defined above.
model = model.bind_tools(TOOLS)
tools_by_name = {tool.name: tool for tool in TOOLS}

def tool_node(state: AgentState):
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}

def call_model(
    state: AgentState,
    config: RunnableConfig,
):
    system_prompt = SystemMessage(
        system_prompt_new
    )
    # print("Sending this msg to LLM:\n", [system_prompt] + state["messages"])
    response = model.invoke([system_prompt] + state["messages"], config)
    return {"messages": [response]}

def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    if not last_message.tool_calls:
        return "end"
    else:
        return "continue"

# Define a new graph
workflow = StateGraph(AgentState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "tools",
        "end": END,
    },
)

workflow.add_edge("tools", "agent")

async def run_graph(user_input: str, thread_id: str):
    async with AsyncConnectionPool(conninfo=os.getenv("DB_URI"), max_size=20, kwargs=connection_kwargs) as pool: # this has been updated
        checkpointer = AsyncPostgresSaver(pool)
        await checkpointer.setup()
        
        graph = workflow.compile(checkpointer=checkpointer)
        config = {"configurable": {"thread_id": thread_id}}
        async for event in graph.astream_events(
            {"messages": [HumanMessage(content=user_input)]},
            version="v2", stream_mode="values", config=config,
        ):
            if event["event"] == "on_chat_model_stream":
                if len(event["data"]["chunk"].content) > 0:
                    print(event["data"]["chunk"].content, end="", flush=True)

Error Message and Stack Trace (if applicable)

[{'id': 'toolu_01WX7gs7ALFqybEHQzDa5S5K', 'input': {}, 'name': 'find_categories', 'type': 'tool_use', 'index': 1}]

Description

I defined the find_categories tool in two different ways to test, but with claude-3-5-sonnet-20240620 the tool call's input is always empty ({}). With OpenAI GPT-4o the same code works fine. What could be wrong?

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
Python Version: 3.10.14 (main, May 6 2024, 14:42:37) [Clang 14.0.6 ]

Package Information

langchain_core: 0.3.25
langchain: 0.3.12
langchain_community: 0.3.12
langsmith: 0.1.145
langchain_anthropic: 0.2.1
langchain_experimental: 0.3.0
langchain_google_genai: 2.0.3
langchain_google_vertexai: 2.0.9
langchain_openai: 0.2.12
langchain_text_splitters: 0.3.3
langchainhub: 0.1.20
langgraph_sdk: 0.1.47
langserve: 0.3.0

Other Dependencies

aiohttp: 3.9.5
anthropic: 0.34.2
anthropic[vertexai]: Installed. No version info available.
async-timeout: 4.0.3
dataclasses-json: 0.6.7
defusedxml: 0.7.1
fastapi: 0.112.0
google-cloud-aiplatform: 1.75.0
google-cloud-storage: 2.19.0
google-generativeai: 0.8.3
httpx: 0.27.0
httpx-sse: 0.4.0
jsonpatch: 1.33
langchain-mistralai: Installed. No version info available.
numpy: 1.26.4
openai: 1.57.2
orjson: 3.10.6
packaging: 24.1
pillow: 10.4.0
pydantic: 2.9.2
pydantic-settings: 2.5.2
PyYAML: 6.0.1
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.31
sse-starlette: 1.8.2
tenacity: 8.5.0
tiktoken: 0.7.0
types-requests: 2.32.0.20240712
typing-extensions: 4.12.2

@vbarda
Contributor

vbarda commented Dec 18, 2024

Could you please reduce the example to the tool definition and model.bind_tools().invoke()? Also, this is not a langgraph issue, so I will transfer it to langchain.

@vbarda vbarda transferred this issue from langchain-ai/langgraph Dec 18, 2024
@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Dec 18, 2024
@keenborder786
Contributor

@HasnainKhanNiazi please add a docstring to each of your tools explaining when to use the tool; that should take care of it.

@HasnainKhanNiazi
Author

@keenborder786 I believe that's not the issue, as I have already included docstrings in the function definitions.

@vbarda
Contributor

vbarda commented Dec 18, 2024

By the way, the first example of the tool is unlikely to work, since its docstring doesn't conform to the Google docstring style. I would also recommend checking tool.get_input_schema() to see the resulting schema.

@keenborder786
Contributor

keenborder786 commented Dec 19, 2024

@HasnainKhanNiazi Can you please use StructuredTool as follows (https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html) and specify the description and args_schema more explicitly?
