Pydantic integration with a StorageContext and Query Engine #15206
Replies: 2 comments 6 replies
-
Hey @theholymath, I'm here to help you with any bugs, questions, and contributions you might have. Let's tackle this together! To resolve the `ValueError: Expected at least one tool call, but got 0 tool calls.` error, you can modify your code so the response always includes a tool call. Here's an example:

```python
from typing import Any, List, Optional, Union
import json

from llama_index.core.llms.function_calling import ToolSelection
from llama_index.core.types import ChatMessage, MessageRole
from llama_index.legacy.tools.query_engine import QueryEngineTool, ToolMetadata


# Example function to create a fallback tool call
def create_tool_call():
    return {
        "input": json.dumps({"param1": "value1"}),  # Example input
        "id": "tool_id_1",
        "name": "example_tool",
        "type": "tool_use",
    }


# Example usage in your chat function
def chat_with_tools(
    self,
    tools: List["BaseTool"],
    user_msg: Optional[Union[str, ChatMessage]] = None,
    chat_history: Optional[List[ChatMessage]] = None,
    verbose: bool = False,
    allow_parallel_tool_calls: bool = False,
    **kwargs: Any,
) -> ChatResponse:
    chat_history = chat_history or []
    if isinstance(user_msg, str):
        user_msg = ChatMessage(role=MessageRole.USER, content=user_msg)
    chat_history.append(user_msg)

    # Convert each tool's metadata into the dict shape the LLM API expects
    tool_dicts = []
    for tool in tools:
        tool_dicts.append(
            {
                "name": tool.metadata.name,
                "description": tool.metadata.description,
                "input_schema": tool.metadata.get_parameters_dict(),
            }
        )

    response = self.chat(chat_history, tools=tool_dicts or None, **kwargs)

    # Ensure at least one tool call is present
    if not response.message.additional_kwargs.get("tool_calls"):
        response.message.additional_kwargs["tool_calls"] = [create_tool_call()]

    if not allow_parallel_tool_calls:
        force_single_tool_call(response)
    return response
```

Additionally, you can use a `QueryEngineTool` so the fallback tool call carries your query engine's metadata:

```python
import json

from llama_index.legacy.tools.query_engine import QueryEngineTool, ToolMetadata
from llama_index.legacy.core.base_query_engine import BaseQueryEngine

# Create a QueryEngineTool instance
query_engine = BaseQueryEngine()  # Replace with your actual query engine instance
metadata = ToolMetadata(name="example_tool", description="Example tool description")
query_engine_tool = QueryEngineTool(query_engine=query_engine, metadata=metadata)


# Example function to create a tool call using QueryEngineTool
def create_tool_call():
    return {
        "input": json.dumps({"param1": "value1"}),  # Example input
        "id": "tool_id_1",
        "name": query_engine_tool.metadata.name,
        "type": "tool_use",
    }
```

This ensures that the response always contains at least one tool call, thus preventing the `ValueError`.
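Stripped of the LlamaIndex types, the guard in `chat_with_tools` boils down to a small dict check. Here is a library-free sketch of that behavior (the function name `ensure_tool_call` and the field values are illustrative, not part of any LlamaIndex API):

```python
import json


def ensure_tool_call(additional_kwargs: dict, fallback_name: str = "example_tool") -> dict:
    """Inject a fallback tool call when the response contains none.

    Mirrors the guard above: if "tool_calls" is missing or empty,
    append a single synthetic tool-use entry so downstream code that
    expects at least one tool call does not raise.
    """
    if not additional_kwargs.get("tool_calls"):
        additional_kwargs["tool_calls"] = [
            {
                "input": json.dumps({"param1": "value1"}),
                "id": "tool_id_1",
                "name": fallback_name,
                "type": "tool_use",
            }
        ]
    return additional_kwargs


# An empty response gains exactly one fallback call...
kwargs = ensure_tool_call({})
assert len(kwargs["tool_calls"]) == 1

# ...while an existing tool call is left untouched.
existing = {"tool_calls": [{"id": "real_call", "type": "tool_use"}]}
assert ensure_tool_call(existing)["tool_calls"][0]["id"] == "real_call"
```

Note that this only masks the symptom: the injected call has a made-up name and input, so the real fix is usually to ensure the LLM actually emits a tool call (see the version suggestion below in the thread).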
-
You can try upgrading to llama-index version 0.10.56. That version also supports the gpt-4o-mini and gpt-4o models.
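Pinning that version is a one-line install (assuming pip; adjust for your package manager or virtual environment):

```shell
pip install "llama-index==0.10.56"
```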
-
I have code that I feel should work, but I keep getting a tool error. I know the vector store is written and populated, and when I query without passing the pydantic class it works just fine. Querying with the pydantic class results in a

`ValueError: Expected at least one tool call, but got 0 tool calls.`

error. When I simply use `index.as_query_engine(llm=chat_llm)` it works fine. I am trying to fill a predefined pydantic data model, and I am loading `from llama_index.core.bridge.pydantic import BaseModel, Field` as well.
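For context, the structured-output path described here amounts to asking the LLM for JSON and validating it into the typed model; the error fires when the LLM never emits the tool call carrying that JSON. A library-free sketch of the validation step, using a stdlib dataclass in place of the Pydantic model (the `Album` schema and the raw JSON string are made up for illustration):

```python
import json
from dataclasses import dataclass, fields


@dataclass
class Album:
    """Stand-in for the predefined Pydantic data model."""
    title: str
    year: int


def parse_structured_output(raw: str) -> Album:
    # A structured-output query engine does essentially this step:
    # decode the model's JSON and validate it against the schema.
    data = json.loads(raw)
    allowed = {f.name for f in fields(Album)}
    unknown = set(data) - allowed
    if unknown:
        raise ValueError(f"Unexpected fields: {sorted(unknown)}")
    return Album(**data)


# Simulated LLM output that matches the schema
album = parse_structured_output('{"title": "Blue Train", "year": 1957}')
```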