Add the Model Context Protocol integration #135058
base: dev
Conversation
async def _async_setup(self) -> None:
    """Set up the client connection."""
    self.ctx_mgr = mcp_client(self.config_entry.data[CONF_URL])
Is there a better way to manage the lifecycle of a context-manager client library?
I'm essentially managing the lifecycle manually here, entering the context when the coordinator starts and exiting it when the coordinator ends.
The only alternative I could think of, and I think we do this one for Matter server or Z-Wave server (forgot which):
async def _connect(self, connection_established: asyncio.Event) -> None:
    try:
        async with mcp_client(url) as session:
            self.session = session
            connection_established.set()
            # Hold the context open until the task is cancelled;
            # otherwise the session closes as soon as this returns.
            await asyncio.Event().wait()
    finally:
        self.session = None

async def _async_setup(self) -> None:
    connection_established = asyncio.Event()
    self.config_entry.async_create_background_task(self._connect(connection_established))
    async with timeout(10):
        await connection_established.wait()
And I guess you can keep the connection task around to cancel it for other reasons as well.
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
Co-authored-by: Joost Lekkerkerker <[email protected]>
"""Initialize the tool.""" | ||
self.name = tool.name | ||
self.description = tool.description | ||
self.parameters = convert_to_voluptuous(tool.inputSchema) |
When this raises, should we log the input schema that brought it down somewhere? Currently I think the exception will bubble all the way up to the conversation entity.
Co-authored-by: Paulus Schoutsen <[email protected]>
Proposed change
Add the Model Context Protocol integration, which is a client.
See https://modelcontextprotocol.io/introduction#general-architecture for background on how MCP works. This integration is the opposite of #134122 and acts as a client.
The initial integration adds support for LLM tools only; it does not yet include prompts or resources, since we don't currently have a general way to support them. It registers a new LLM API that exposes the tools returned by the remote MCP server, and that LLM API can be consumed by another conversation agent integration.
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (ruff format homeassistant tests).

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:
- Updated and included derived files by running: python3 -m script.hassfest.
- requirements_all.txt updated by running python3 -m script.gen_requirements_all.

To help with the load of incoming pull requests: