diff --git a/docs/docs/integrations/chat/anthropic.ipynb b/docs/docs/integrations/chat/anthropic.ipynb
index 0120e7f0442a1..805b65b5beab6 100644
--- a/docs/docs/integrations/chat/anthropic.ipynb
+++ b/docs/docs/integrations/chat/anthropic.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "raw",
- "id": "a016701c",
+ "id": "afaf8039",
"metadata": {},
"source": [
"---\n",
@@ -12,383 +12,203 @@
},
{
"cell_type": "markdown",
- "id": "bf733a38-db84-4363-89e2-de6735c37230",
+ "id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatAnthropic\n",
"\n",
- "This notebook covers how to get started with Anthropic chat models.\n",
+ "This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatAnthropic features and configurations, head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n",
+ "\n",
+ "Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n",
+ "\n",
+ "\n",
+ ":::info AWS Bedrock and Google VertexAI\n",
+ "\n",
+ "Note that certain Anthropic models can also be accessed via AWS Bedrock and Google VertexAI. See the [ChatBedrock](/docs/integrations/chat/bedrock/) and [ChatVertexAI](/docs/integrations/chat/google_vertex_ai_palm/) integrations to use Anthropic models via these services.\n",
+ "\n",
+ ":::\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/anthropic) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [langchain-anthropic](https://api.python.langchain.com/en/latest/anthropic_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-anthropic?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-anthropic?style=flat-square&label=%20) |\n",
+ "\n",
+ "### Model features\n",
+ "| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+ "| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
- "For setup instructions, please see the Installation and Environment Setup sections of the [Anthropic Platform page](/docs/integrations/platforms/anthropic.mdx)."
+ "To access Anthropic models you'll need to create an Anthropic account, get an API key, and install the `langchain-anthropic` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the `ANTHROPIC_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "91be2e12",
+ "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
- "%pip install -qU langchain-anthropic"
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")"
]
},
{
"cell_type": "markdown",
- "id": "584ed5ec",
+ "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
- "## Environment Setup\n",
- "\n",
- "We'll need to get an [Anthropic](https://console.anthropic.com/settings/keys) API key and set the `ANTHROPIC_API_KEY` environment variable:"
+ "If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
- "execution_count": 2,
- "id": "01578ae3",
+ "execution_count": null,
+ "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
- "import os\n",
- "from getpass import getpass\n",
- "\n",
- "os.environ[\"ANTHROPIC_API_KEY\"] = getpass()"
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
- "id": "d1f9df276476f0bc",
- "metadata": {
- "collapsed": false,
- "jupyter": {
- "outputs_hidden": false
- }
- },
+ "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
+ "metadata": {},
"source": [
- "The code provided assumes that your ANTHROPIC_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
- "```python\n",
- "chat = ChatAnthropic(temperature=0, api_key=\"YOUR_API_KEY\", model_name=\"claude-3-opus-20240229\")\n",
- "\n",
- "```\n",
- "\n",
- "In these demos, we will use the Claude 3 Opus model, and you can also use the launch version of the Sonnet model with `claude-3-sonnet-20240229`.\n",
+ "### Installation\n",
"\n",
- "You can check the model comparison doc [here](https://docs.anthropic.com/claude/docs/models-overview#model-comparison)."
+ "The LangChain Anthropic integration lives in the `langchain-anthropic` package:"
]
},
{
"cell_type": "code",
- "execution_count": 1,
- "id": "238bdbaa-526a-4130-89e9-523aa44bb196",
+ "execution_count": null,
+ "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
- "from langchain_anthropic import ChatAnthropic\n",
- "from langchain_core.prompts import ChatPromptTemplate"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2024-01-19T11:25:07.274418Z",
- "start_time": "2024-01-19T11:25:05.898031Z"
- },
- "tags": []
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "AIMessage(content='저는 파이썬을 사랑합니다.\\n\\nTranslation:\\nI love Python.')"
- ]
- },
- "execution_count": 5,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "chat = ChatAnthropic(temperature=0, model_name=\"claude-3-opus-20240229\")\n",
- "\n",
- "system = (\n",
- " \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
- ")\n",
- "human = \"{text}\"\n",
- "prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
- "\n",
- "chain = prompt | chat\n",
- "chain.invoke(\n",
- " {\n",
- " \"input_language\": \"English\",\n",
- " \"output_language\": \"Korean\",\n",
- " \"text\": \"I love Python\",\n",
- " }\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
- "metadata": {},
- "source": [
- "## `ChatAnthropic` also supports async and streaming functionality:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2024-01-19T11:25:10.448733Z",
- "start_time": "2024-01-19T11:25:08.866277Z"
- },
- "tags": []
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "AIMessage(content='Sure, here\\'s a joke about a bear:\\n\\nA bear walks into a bar and says to the bartender, \"I\\'ll have a pint of beer and a.......... packet of peanuts.\"\\n\\nThe bartender asks, \"Why the big pause?\"\\n\\nThe bear replies, \"I don\\'t know, I\\'ve always had them!\"')"
- ]
- },
- "execution_count": 5,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "chat = ChatAnthropic(temperature=0, model_name=\"claude-3-opus-20240229\")\n",
- "prompt = ChatPromptTemplate.from_messages([(\"human\", \"Tell me a joke about {topic}\")])\n",
- "chain = prompt | chat\n",
- "await chain.ainvoke({\"topic\": \"bear\"})"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2024-01-19T11:25:24.438696Z",
- "start_time": "2024-01-19T11:25:14.687480Z"
- },
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Here is a list of famous tourist attractions in Japan:\n",
- "\n",
- "1. Tokyo Skytree (Tokyo)\n",
- "2. Senso-ji Temple (Tokyo)\n",
- "3. Meiji Shrine (Tokyo)\n",
- "4. Tokyo DisneySea (Urayasu, Chiba)\n",
- "5. Fushimi Inari Taisha (Kyoto)\n",
- "6. Kinkaku-ji (Golden Pavilion) (Kyoto)\n",
- "7. Kiyomizu-dera (Kyoto)\n",
- "8. Nijo Castle (Kyoto)\n",
- "9. Osaka Castle (Osaka)\n",
- "10. Dotonbori (Osaka)\n",
- "11. Hiroshima Peace Memorial Park (Hiroshima)\n",
- "12. Itsukushima Shrine (Miyajima Island, Hiroshima)\n",
- "13. Himeji Castle (Himeji)\n",
- "14. Todai-ji Temple (Nara)\n",
- "15. Nara Park (Nara)\n",
- "16. Mount Fuji (Shizuoka and Yamanashi Prefectures)\n",
- "17."
- ]
- }
- ],
- "source": [
- "chat = ChatAnthropic(temperature=0.3, model_name=\"claude-3-opus-20240229\")\n",
- "prompt = ChatPromptTemplate.from_messages(\n",
- " [(\"human\", \"Give me a list of famous tourist attractions in Japan\")]\n",
- ")\n",
- "chain = prompt | chat\n",
- "for chunk in chain.stream({}):\n",
- " print(chunk.content, end=\"\", flush=True)"
+ "%pip install -qU langchain-anthropic"
]
},
{
"cell_type": "markdown",
- "id": "ab0174d8-7140-413c-80a9-7cf3a8b81bb4",
+ "id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
- "## [Beta] Tool-calling\n",
- "\n",
- "With Anthropic's [tool-calling, or tool-use, API](https://docs.anthropic.com/claude/docs/functions-external-tools), you can define tools for the model to invoke. This is extremely useful for building tool-using chains and agents, as well as for getting structured outputs from a model.\n",
- "\n",
- ":::note\n",
- "\n",
- "Anthropic's tool-calling functionality is still in beta.\n",
- "\n",
- ":::\n",
- "\n",
- "### bind_tools()\n",
+ "## Instantiation\n",
"\n",
- "With `ChatAnthropic.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an Anthropic tool schemas, which looks like:\n",
- "```\n",
- "{\n",
- " \"name\": \"...\",\n",
- " \"description\": \"...\",\n",
- " \"input_schema\": {...} # JSONSchema\n",
- "}\n",
- "```\n",
- "and passed in every model invocation."
+ "Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
- "execution_count": 3,
- "id": "42f87466-cb8e-490d-a9f8-aa0f8e9b4217",
+ "execution_count": 1,
+ "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
- "from langchain_core.pydantic_v1 import BaseModel, Field\n",
- "\n",
- "llm = ChatAnthropic(model=\"claude-3-opus-20240229\", temperature=0)\n",
- "\n",
- "\n",
- "class GetWeather(BaseModel):\n",
- " \"\"\"Get the current weather in a given location\"\"\"\n",
- "\n",
- " location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
- "\n",
+ "from langchain_anthropic import ChatAnthropic\n",
"\n",
- "llm_with_tools = llm.bind_tools([GetWeather])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "997be6ff-3fd3-4b1c-b7e3-2e5fed4ac964",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "AIMessage(content=[{'text': '\\nThe user is asking about the current weather in a specific location, San Francisco. The relevant tool to answer this is the GetWeather function.\\n\\nLooking at the parameters for GetWeather:\\n- location (required): The user directly provided the location in the query - \"San Francisco\"\\n\\nSince the required \"location\" parameter is present, we can proceed with calling the GetWeather function.\\n', 'type': 'text'}, {'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9', 'input': {'location': 'San Francisco, CA'}, 'name': 'GetWeather', 'type': 'tool_use'}], response_metadata={'id': 'msg_01HepCTzqXJed5iNuLgV1VCZ', 'model': 'claude-3-opus-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 487, 'output_tokens': 143}}, id='run-1a1b3289-ba2c-47ae-8be1-8929d7cc547e-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9'}])"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "ai_msg = llm_with_tools.invoke(\n",
- " \"what is the weather like in San Francisco\",\n",
- ")\n",
- "ai_msg"
+ "llm = ChatAnthropic(\n",
+ " model=\"claude-3-sonnet-20240229\",\n",
+ " temperature=0,\n",
+ " max_tokens=1024,\n",
+ " timeout=None,\n",
+ " max_retries=2,\n",
+ " # other params...\n",
+ ")"
]
},
{
"cell_type": "markdown",
- "id": "1e63ac67-8c42-4468-8178-e54f13c3c5c3",
+ "id": "2b4f3e15",
"metadata": {},
"source": [
- "Notice that the output message content is a list that contains a text block and then a tool_use block:"
+ "## Invocation\n"
]
},
{
"cell_type": "code",
- "execution_count": 5,
- "id": "7c4cd4c4-1c78-4d6c-8607-759e32a8903b",
- "metadata": {},
+ "execution_count": 2,
+ "id": "62e0dbc3",
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
"data": {
"text/plain": [
- "[{'text': '\\nThe user is asking about the current weather in a specific location, San Francisco. The relevant tool to answer this is the GetWeather function.\\n\\nLooking at the parameters for GetWeather:\\n- location (required): The user directly provided the location in the query - \"San Francisco\"\\n\\nSince the required \"location\" parameter is present, we can proceed with calling the GetWeather function.\\n',\n",
- " 'type': 'text'},\n",
- " {'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9',\n",
- " 'input': {'location': 'San Francisco, CA'},\n",
- " 'name': 'GetWeather',\n",
- " 'type': 'tool_use'}]"
+ "AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", response_metadata={'id': 'msg_013qztabaFADNnKsHR1rdrju', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 21}}, id='run-a22ab30c-7e09-48f5-bc27-a08a9d8f7fa1-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
]
},
- "execution_count": 5,
+ "execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "ai_msg.content"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d446bd0f-06cc-4aa6-945d-74335d5a8780",
- "metadata": {},
- "source": [
- "Crucially, the tool calls are also extracted into the `tool_calls` where they are in a standardized, model-agnostic format:"
+ "messages = [\n",
+ " (\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+ " ),\n",
+ " (\"human\", \"I love programming.\"),\n",
+ "]\n",
+ "ai_msg = llm.invoke(messages)\n",
+ "ai_msg"
]
},
{
"cell_type": "code",
- "execution_count": 7,
- "id": "e36f254e-bb89-4978-9351-a463b13eb3c7",
+ "execution_count": 3,
+ "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
- "data": {
- "text/plain": [
- "[{'name': 'GetWeather',\n",
- " 'args': {'location': 'San Francisco, CA'},\n",
- " 'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9'}]"
- ]
- },
- "execution_count": 7,
- "metadata": {},
- "output_type": "execute_result"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Voici la traduction en français :\n",
+ "\n",
+ "J'aime la programmation.\n"
+ ]
}
],
"source": [
- "ai_msg.tool_calls"
+ "print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
- "id": "90e015e0-c6e5-4ff5-8fb9-be0cd3c86395",
+ "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
- ":::tip\n",
- "\n",
- "ChatAnthropic model outputs are always a single AI message that can have either a single string or a list of content blocks. The content blocks can be text blocks or tool-duse blocks. There can be multiple of each and they can be interspersed.\n",
+ "## Chaining\n",
"\n",
- ":::"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b5145dea-0183-4cab-b9e2-0e35fb8370cf",
- "metadata": {},
- "source": [
- "### Forcing tool calls\n",
- "\n",
- "By default the model can choose whether to call any tools. To force the model to call at least one tool we can specify `bind_tools(..., tool_choice=\"any\")` and to force the model to call a specific tool we can pass in that tool name `bind_tools(..., tool_choice=\"GetWeather\")`"
+ "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
- "id": "05993626-060c-449f-8069-e52d31442977",
+ "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- "[{'name': 'GetWeather',\n",
- " 'args': {'location': ''},\n",
- " 'id': 'toolu_01DwWjKzHPs6EHCUPxsGm9bN'}]"
+ "AIMessage(content='Ich liebe Programmieren.', response_metadata={'id': 'msg_01FWrA8w9HbjqYPTQ7VryUnp', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 11}}, id='run-b749bf20-b46d-4d62-ac73-f59adab6dd7e-0', usage_metadata={'input_tokens': 23, 'output_tokens': 11, 'total_tokens': 34})"
]
},
"execution_count": 4,
@@ -397,295 +217,114 @@
}
],
"source": [
- "llm_with_force_tools = llm.bind_tools([GetWeather], tool_choice=\"GetWeather\")\n",
- "# Notice the model will still return tool calls despite a message that\n",
- "# doesn't have anything to do with the tools.\n",
- "llm_with_force_tools.invoke(\"this doesn't really require tool use\").tool_calls"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "8652ee98-814c-4ed6-9def-275eeaa9651e",
- "metadata": {},
- "source": [
- "### Parsing tool calls\n",
+ "from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
- "The `langchain_anthropic.output_parsers.ToolsOutputParser` makes it easy to parse the tool calls from an Anthropic AI message into Pydantic objects if we'd like:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "59c175b1-0929-4ed4-a608-f0006031a3c2",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_anthropic.output_parsers import ToolsOutputParser"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "id": "08f6c62c-923b-400e-9bc8-8aff417466b2",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[GetWeather(location='New York City, NY'),\n",
- " GetWeather(location='Los Angeles, CA'),\n",
- " GetWeather(location='San Francisco, CA'),\n",
- " GetWeather(location='Cleveland, OH')]"
- ]
- },
- "execution_count": 16,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "parser = ToolsOutputParser(pydantic_schemas=[GetWeather])\n",
- "chain = llm_with_tools | parser\n",
- "chain.invoke(\"What is the weather like in nyc, la, sf and cleveland\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "ab05dd51-0a9e-4b7b-b182-65cec44941ac",
- "metadata": {},
- "source": [
- "### with_structured_output()\n",
+ "prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+ " ),\n",
+ " (\"human\", \"{input}\"),\n",
+ " ]\n",
+ ")\n",
"\n",
- "The [BaseChatModel.with_structured_output interface](/docs/how_to/structured_output) makes it easy to get structured output from chat models. You can use `ChatAnthropic.with_structured_output`, which uses tool-calling under the hood), to get the model to more reliably return an output in a specific format:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 18,
- "id": "e047b831-2338-4c2d-9ee4-0763f74e80e1",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "GetWeather(location='San Francisco, CA')"
- ]
- },
- "execution_count": 18,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "structured_llm = llm.with_structured_output(GetWeather)\n",
- "structured_llm.invoke(\n",
- " \"what is the weather like in San Francisco\",\n",
+ "chain = prompt | llm\n",
+ "chain.invoke(\n",
+ " {\n",
+ " \"input_language\": \"English\",\n",
+ " \"output_language\": \"German\",\n",
+ " \"input\": \"I love programming.\",\n",
+ " }\n",
")"
]
},
{
"cell_type": "markdown",
- "id": "2d74b83e-bcd3-47e6-911e-82b5dcfbd20e",
+ "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
- "The main difference between using \n",
- "```python\n",
- "llm.with_structured_output(GetWeather)\n",
- "``` \n",
- "vs \n",
+ "## Content blocks\n",
"\n",
- "```python\n",
- "llm.bind_tools([GetWeather]) | ToolsOutputParser(pydantic_schemas=[GetWeather])\n",
- "``` \n",
- "is that it will return only the first GetWeather call, whereas the second approach will return a list."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5b61884e-3e4e-4145-b10d-188987ae1eb6",
- "metadata": {},
- "source": [
- "### Passing tool results to model\n",
- "\n",
- "We can use `ToolMessage`s with the appropriate `tool_call_id`s to pass tool results back to the model:"
+ "One key difference to note between Anthropic models and most others is that the contents of a single Anthropic AI message can either be a single string or a **list of content blocks**. For example, when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized `AIMessage.tool_calls`):"
]
},
{
"cell_type": "code",
- "execution_count": 5,
- "id": "9d07a1c1-4542-440e-a1fb-392542267fb8",
+ "execution_count": 10,
+ "id": "4a374a24-2534-4e6f-825b-30fab7bbe0cb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- "AIMessage(content='Based on calling the GetWeather function, the weather in San Francisco, CA is:\\nRain with a high temperature of 54°F and winds from the southwest at 15-25 mph. There is a 100% chance of rain.', response_metadata={'id': 'msg_01J7nWVRPPTgae4eDpf9yR3M', 'model': 'claude-3-opus-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 670, 'output_tokens': 56}}, id='run-44fcd34f-9c24-464f-94dd-63bd0d22870d-0')"
+ "[{'text': \"Okay, let's use the GetWeather tool to check the current temperatures in Los Angeles and New York City.\",\n",
+ " 'type': 'text'},\n",
+ " {'id': 'toolu_01Tnp5tL7LJZaVyQXKEjbqcC',\n",
+ " 'input': {'location': 'Los Angeles, CA'},\n",
+ " 'name': 'GetWeather',\n",
+ " 'type': 'tool_use'}]"
]
},
- "execution_count": 5,
+ "execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "from langchain_core.messages import AIMessage, HumanMessage, ToolMessage\n",
+ "from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
- "messages = [\n",
- " HumanMessage(\"What is the weather like in San Francisco\"),\n",
- " AIMessage(\n",
- " content=[\n",
- " {\n",
- " \"text\": '\\nBased on the user\\'s question, the relevant function to call is GetWeather, which requires the \"location\" parameter.\\n\\nThe user has directly specified the location as \"San Francisco\". Since San Francisco is a well known city, I can reasonably infer they mean San Francisco, CA without needing the state specified.\\n\\nAll the required parameters are provided, so I can proceed with the API call.\\n',\n",
- " \"type\": \"text\",\n",
- " },\n",
- " {\n",
- " \"type\": \"tool_use\",\n",
- " \"id\": \"toolu_01SCgExKzQ7eqSkMHfygvYuu\",\n",
- " \"name\": \"GetWeather\",\n",
- " \"input\": {\"location\": \"San Francisco, CA\"},\n",
- " \"text\": None,\n",
- " },\n",
- " ],\n",
- " ),\n",
- " ToolMessage(\n",
- " \"Rain. High 54F. Winds SW at 15 to 25 mph. Chance of rain 100%.\",\n",
- " tool_call_id=\"toolu_01SCgExKzQ7eqSkMHfygvYuu\",\n",
- " ),\n",
- "]\n",
- "llm_with_tools.invoke(messages)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1c82d198-77ce-4d5a-a65b-a98fd3c10740",
- "metadata": {},
- "source": [
- "### Streaming\n",
"\n",
- "::: {.callout-warning}\n",
+ "class GetWeather(BaseModel):\n",
+ " \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
- "Anthropic does not currently support streaming tool calls. Attempting to stream will yield a single final message.\n",
+ " location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
- ":::"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "d1284ddc-eb82-44be-b034-5046809536de",
- "metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "/Users/bagatur/langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py:328: UserWarning: stream: Tool use is not yet supported in streaming mode.\n",
- " warnings.warn(\"stream: Tool use is not yet supported in streaming mode.\")\n"
- ]
- },
- {
- "data": {
- "text/plain": [
- "[AIMessage(content=[{'text': '\\nThe user is asking for the current weather in a specific location, San Francisco. The GetWeather function is the relevant tool to answer this request, as it returns the current weather for a given location.\\n\\nThe GetWeather function has one required parameter:\\nlocation: The city and state, e.g. San Francisco, CA\\n\\nThe user provided the city San Francisco in their request. They did not specify the state, but it can be reasonably inferred that they are referring to San Francisco, California since that is the most well known city with that name.\\n\\nSince the required location parameter has been provided by the user, we can proceed with calling the GetWeather function.\\n', 'type': 'text'}, {'text': None, 'type': 'tool_use', 'id': 'toolu_01V9ZripoQzuY8HubspJy6fP', 'name': 'GetWeather', 'input': {'location': 'San Francisco, CA'}}], id='run-b825206b-5b6b-48bc-ad8d-802dee310c7f')]"
- ]
- },
- "execution_count": 8,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "list(llm_with_tools.stream(\"What's the weather in san francisco\"))"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "70d5e0fb",
- "metadata": {},
- "source": [
- "## Multimodal\n",
"\n",
- "Anthropic's Claude-3 models are compatible with both image and text inputs. You can use this as follows:"
+ "llm_with_tools = llm.bind_tools([GetWeather])\n",
+ "ai_msg = llm_with_tools.invoke(\"Which city is hotter today: LA or NY?\")\n",
+ "ai_msg.content"
]
},
{
"cell_type": "code",
- "execution_count": 1,
- "id": "3e9d1ab5",
+ "execution_count": 11,
+ "id": "6b4a1ead-952c-489f-a8d4-355d3fb55f3f",
"metadata": {},
"outputs": [
{
"data": {
- "text/html": [
- ""
- ],
"text/plain": [
- ""
+ "[{'name': 'GetWeather',\n",
+ " 'args': {'location': 'Los Angeles, CA'},\n",
+ " 'id': 'toolu_01Tnp5tL7LJZaVyQXKEjbqcC'}]"
]
},
- "execution_count": 1,
+ "execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "# open ../../../static/img/brand/wordmark.png as base64 str\n",
- "import base64\n",
- "from pathlib import Path\n",
- "\n",
- "from IPython.display import HTML\n",
- "\n",
- "img_path = Path(\"../../../static/img/brand/wordmark.png\")\n",
- "img_base64 = base64.b64encode(img_path.read_bytes()).decode(\"utf-8\")\n",
- "\n",
- "# display b64 image in notebook\n",
- "HTML(f'')"
+ "ai_msg.tool_calls"
]
},
{
- "cell_type": "code",
- "execution_count": 6,
- "id": "b6bb2aa2",
+ "cell_type": "markdown",
+ "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "AIMessage(content='This logo is for LangChain, which appears to be some kind of software or technology platform based on the name and minimalist design style of the logo featuring a silhouette of a bird (likely an eagle or hawk) and the company name in a simple, modern font.')"
- ]
- },
- "execution_count": 6,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
"source": [
- "from langchain_core.messages import HumanMessage\n",
+ "## API reference\n",
"\n",
- "chat = ChatAnthropic(model=\"claude-3-opus-20240229\")\n",
- "messages = [\n",
- " HumanMessage(\n",
- " content=[\n",
- " {\n",
- " \"type\": \"image_url\",\n",
- " \"image_url\": {\n",
- " # langchain logo\n",
- " \"url\": f\"data:image/png;base64,{img_base64}\",\n",
- " },\n",
- " },\n",
- " {\"type\": \"text\", \"text\": \"What is this logo for?\"},\n",
- " ]\n",
- " )\n",
- "]\n",
- "chat.invoke(messages)"
+ "For detailed documentation of all ChatAnthropic features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html"
]
}
],
"metadata": {
"kernelspec": {
- "display_name": "poetry-venv-2",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
- "name": "poetry-venv-2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
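The notebook's new "Content blocks" section points out that an Anthropic AI message's `content` can be either a plain string or a list of typed blocks (`"text"` and `"tool_use"`). A rough, dependency-free sketch of normalizing that shape is below; the helper names are hypothetical and not part of `langchain-anthropic`, which exposes the same information via `AIMessage.content` and `AIMessage.tool_calls`:

```python
# Sketch: handling Anthropic-style message content, which can be either a
# plain string or a list of typed blocks ("text" / "tool_use").
# Helper names are hypothetical, for illustration only.
from typing import Any, Dict, List, Union

Content = Union[str, List[Dict[str, Any]]]


def extract_text(content: Content) -> str:
    """Concatenate all text, whether content is a string or a block list."""
    if isinstance(content, str):
        return content
    return "".join(b.get("text", "") for b in content if b.get("type") == "text")


def extract_tool_calls(content: Content) -> List[Dict[str, Any]]:
    """Pull tool_use blocks into the standardized {name, args, id} shape."""
    if isinstance(content, str):
        return []
    return [
        {"name": b["name"], "args": b["input"], "id": b["id"]}
        for b in content
        if b.get("type") == "tool_use"
    ]


blocks = [
    {"type": "text", "text": "Okay, let's use the GetWeather tool."},
    {"type": "tool_use", "id": "toolu_01", "name": "GetWeather",
     "input": {"location": "Los Angeles, CA"}},
]
print(extract_text(blocks))
print(extract_tool_calls(blocks))
```

This mirrors the notebook's example output, where `ai_msg.content` carries both a text block and a `tool_use` block while `ai_msg.tool_calls` holds the standardized form.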
diff --git a/libs/cli/langchain_cli/integration_template/docs/chat.ipynb b/libs/cli/langchain_cli/integration_template/docs/chat.ipynb
index 0a6d77fa048e3..da6d64feb1116 100644
--- a/libs/cli/langchain_cli/integration_template/docs/chat.ipynb
+++ b/libs/cli/langchain_cli/integration_template/docs/chat.ipynb
@@ -32,7 +32,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
- "| [Chat__ModuleName__](https://api.python.langchain.com/en/latest/chat_models/__module_name__.chat_models.Chat__ModuleName__.html) | [__package__name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | beta/❌ | ✅/❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/__package_name__?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/__package_name__?style=flat-square&label=%20) |\n",
+ "| [Chat__ModuleName__](https://api.python.langchain.com/en/latest/chat_models/__module_name__.chat_models.Chat__ModuleName__.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | beta/❌ | ✅/❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/__package_name__?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/__package_name__?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
diff --git a/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py b/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py
index d5951404e8165..ed5134e763e2a 100644
--- a/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py
+++ b/libs/cli/langchain_cli/integration_template/integration_template/chat_models.py
@@ -35,11 +35,11 @@ class Chat__ModuleName__(BaseChatModel):
# TODO: Populate with relevant params.
Key init args — client params:
- timeout:
+ timeout: Optional[float]
Timeout for requests.
- max_retries:
+ max_retries: int
Max number of retries.
- api_key:
+ api_key: Optional[str]
__ModuleName__ API key. If not passed in will be read from env var __MODULE_NAME___API_KEY.
See full list of supported init args and their descriptions in the params section.
diff --git a/libs/partners/anthropic/langchain_anthropic/chat_models.py b/libs/partners/anthropic/langchain_anthropic/chat_models.py
index 9988f38b1234b..91a6e31a2f65e 100644
--- a/libs/partners/anthropic/langchain_anthropic/chat_models.py
+++ b/libs/partners/anthropic/langchain_anthropic/chat_models.py
@@ -228,18 +228,239 @@ def _format_messages(messages: List[BaseMessage]) -> Tuple[Optional[str], List[D
class ChatAnthropic(BaseChatModel):
- """Anthropic chat model.
+ """Anthropic chat model integration.
- To use, you should have the environment variable ``ANTHROPIC_API_KEY``
- set with your API key, or pass it as a named parameter to the constructor.
+ See https://docs.anthropic.com/en/docs/models-overview for a list of the latest models.
- Example:
+ Setup:
+ Install ``langchain-anthropic`` and set environment variable ``ANTHROPIC_API_KEY``.
+
+ .. code-block:: bash
+
+ pip install -U langchain-anthropic
+ export ANTHROPIC_API_KEY="your-api-key"
+
+ Key init args — completion params:
+ model: str
+ Name of Anthropic model to use. E.g. "claude-3-sonnet-20240229".
+ temperature: float
+ Sampling temperature. Ranges from 0.0 to 1.0.
+ max_tokens: Optional[int]
+ Max number of tokens to generate.
+
+ Key init args — client params:
+ timeout: Optional[float]
+ Timeout for requests.
+ max_retries: int
+ Max number of retries if a request fails.
+ api_key: Optional[str]
+ Anthropic API key. If not passed in will be read from env var ANTHROPIC_API_KEY.
+ base_url: Optional[str]
+ Base URL for API requests. Only specify if using a proxy or service
+ emulator.
+
+ See full list of supported init args and their descriptions in the params section.
+
+ Instantiate:
.. code-block:: python
from langchain_anthropic import ChatAnthropic
- model = ChatAnthropic(model='claude-3-opus-20240229')
- """
+ llm = ChatAnthropic(
+ model="claude-3-sonnet-20240229",
+ temperature=0,
+ max_tokens=1024,
+ timeout=None,
+ max_retries=2,
+ # api_key="...",
+ # base_url="...",
+ # other params...
+ )
+
+ **NOTE**: Any param which is not explicitly supported will be passed directly to the
+ ``anthropic.Anthropic.messages.create(...)`` API every time the model is
+ invoked. For example:
+ .. code-block:: python
+
+ from langchain_anthropic import ChatAnthropic
+ import anthropic
+
+ ChatAnthropic(..., extra_headers={}).invoke(...)
+
+ # results in underlying API call of:
+
+ anthropic.Anthropic(...).messages.create(..., extra_headers={})
+
+ # which is also equivalent to:
+
+ ChatAnthropic(...).invoke(..., extra_headers={})
+
+ Invoke:
+ .. code-block:: python
+
+ messages = [
+ ("system", "You are a helpful translator. Translate the user sentence to French."),
+ ("human", "I love programming."),
+ ]
+ llm.invoke(messages)
+
+ .. code-block:: python
+
+ AIMessage(content="J'aime la programmation.", response_metadata={'id': 'msg_01Trik66aiQ9Z1higrD5XFx3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 25, 'output_tokens': 11}}, id='run-5886ac5f-3c2e-49f5-8a44-b1e92808c929-0', usage_metadata={'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36})
+
+ Stream:
+ .. code-block:: python
+
+ for chunk in llm.stream(messages):
+ print(chunk)
+
+ .. code-block:: python
+
+ AIMessageChunk(content='J', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content="'", id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content='a', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content='ime', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content=' la', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content=' programm', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content='ation', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+ AIMessageChunk(content='.', id='run-272ff5f9-8485-402c-b90d-eac8babc5b25')
+
+ .. code-block:: python
+
+ stream = llm.stream(messages)
+ full = next(stream)
+ for chunk in stream:
+ full += chunk
+ full
+
+ .. code-block:: python
+
+ AIMessageChunk(content="J'aime la programmation.", id='run-b34faef0-882f-4869-a19c-ed2b856e6361')
+
+ Async:
+ .. code-block:: python
+
+ await llm.ainvoke(messages)
+
+ # stream:
+ # async for chunk in llm.astream(messages)
+
+ # batch:
+ # await llm.abatch([messages])
+
+ .. code-block:: python
+
+ AIMessage(content="J'aime la programmation.", response_metadata={'id': 'msg_01Trik66aiQ9Z1higrD5XFx3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 25, 'output_tokens': 11}}, id='run-5886ac5f-3c2e-49f5-8a44-b1e92808c929-0', usage_metadata={'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36})
+
+ Tool calling:
+ .. code-block:: python
+
+ from langchain_core.pydantic_v1 import BaseModel, Field
+
+ class GetWeather(BaseModel):
+ '''Get the current weather in a given location'''
+
+ location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
+
+ class GetPopulation(BaseModel):
+ '''Get the current population in a given location'''
+
+ location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
+
+ llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
+ ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
+ ai_msg.tool_calls
+
+ .. code-block:: python
+
+ [{'name': 'GetWeather',
+ 'args': {'location': 'Los Angeles, CA'},
+ 'id': 'toolu_01KzpPEAgzura7hpBqwHbWdo'},
+ {'name': 'GetWeather',
+ 'args': {'location': 'New York, NY'},
+ 'id': 'toolu_01JtgbVGVJbiSwtZk3Uycezx'},
+ {'name': 'GetPopulation',
+ 'args': {'location': 'Los Angeles, CA'},
+ 'id': 'toolu_01429aygngesudV9nTbCKGuw'},
+ {'name': 'GetPopulation',
+ 'args': {'location': 'New York, NY'},
+ 'id': 'toolu_01JPktyd44tVMeBcPPnFSEJG'}]
+
+ See ``ChatAnthropic.bind_tools()`` method for more.
+
+ Structured output:
+ .. code-block:: python
+
+ from typing import Optional
+
+ from langchain_core.pydantic_v1 import BaseModel, Field
+
+ class Joke(BaseModel):
+ '''Joke to tell user.'''
+
+ setup: str = Field(description="The setup of the joke")
+ punchline: str = Field(description="The punchline to the joke")
+ rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
+
+ structured_llm = llm.with_structured_output(Joke)
+ structured_llm.invoke("Tell me a joke about cats")
+
+ .. code-block:: python
+
+ Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=None)
+
+ See ``ChatAnthropic.with_structured_output()`` for more.
+
+ Image input:
+ .. code-block:: python
+
+ import base64
+ import httpx
+ from langchain_core.messages import HumanMessage
+
+ image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
+ image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
+ message = HumanMessage(
+ content=[
+ {"type": "text", "text": "describe the weather in this image"},
+ {
+ "type": "image_url",
+ "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
+ },
+ ],
+ )
+ ai_msg = llm.invoke([message])
+ ai_msg.content
+
+ .. code-block:: python
+
+ "The image depicts a sunny day with a partly cloudy sky. The sky is a brilliant blue color with scattered white clouds drifting across. The lighting and cloud patterns suggest pleasant, mild weather conditions. The scene shows a grassy field or meadow with a wooden boardwalk trail leading through it, indicating an outdoor setting on a nice day well-suited for enjoying nature."
+
+ Token usage:
+ .. code-block:: python
+
+ ai_msg = llm.invoke(messages)
+ ai_msg.usage_metadata
+
+ .. code-block:: python
+
+ {'input_tokens': 25, 'output_tokens': 11, 'total_tokens': 36}
+
+ Response metadata:
+ .. code-block:: python
+
+ ai_msg = llm.invoke(messages)
+ ai_msg.response_metadata
+
+ .. code-block:: python
+
+ {'id': 'msg_013xU6FHEGEq76aP4RgFerVT',
+ 'model': 'claude-3-sonnet-20240229',
+ 'stop_reason': 'end_turn',
+ 'stop_sequence': None,
+ 'usage': {'input_tokens': 25, 'output_tokens': 11}}
+
+ """ # noqa: E501
class Config:
"""Configuration for this pydantic object."""
@@ -271,7 +492,12 @@ class Config:
max_retries: int = 2
"""Number of retries allowed for requests sent to the Anthropic Completion API."""
- anthropic_api_url: Optional[str] = None
+ anthropic_api_url: Optional[str] = Field(None, alias="base_url")
+ """Base URL for API requests. Only specify if using a proxy or service emulator.
+
+ If a value isn't passed in and environment variable ANTHROPIC_BASE_URL is set, value
+ will be read from there.
+ """
anthropic_api_key: Optional[SecretStr] = Field(None, alias="api_key")
"""Automatically read from env var `ANTHROPIC_API_KEY` if not provided."""
@@ -353,6 +579,7 @@ def validate_environment(cls, values: Dict) -> Dict:
api_url = (
values.get("anthropic_api_url")
or os.environ.get("ANTHROPIC_API_URL")
+ or os.environ.get("ANTHROPIC_BASE_URL")
or "https://api.anthropic.com"
)
values["anthropic_api_url"] = api_url
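The resolution order added in this hunk can be sketched as a standalone helper (hypothetical name `resolve_api_url`; the real logic lives inline in `validate_environment`): an explicit argument wins, then `ANTHROPIC_API_URL`, then the new `ANTHROPIC_BASE_URL`, then the public default.

```python
import os
from typing import Optional


def resolve_api_url(explicit: Optional[str] = None) -> str:
    # Mirrors the fallback chain in validate_environment: explicit value,
    # then ANTHROPIC_API_URL, then ANTHROPIC_BASE_URL, then the default.
    return (
        explicit
        or os.environ.get("ANTHROPIC_API_URL")
        or os.environ.get("ANTHROPIC_BASE_URL")
        or "https://api.anthropic.com"
    )
```

Keeping `ANTHROPIC_API_URL` ahead of `ANTHROPIC_BASE_URL` preserves backward compatibility for anyone already relying on the older variable.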
diff --git a/libs/partners/anthropic/tests/unit_tests/test_chat_models.py b/libs/partners/anthropic/tests/unit_tests/test_chat_models.py
index 1b8968d1a177a..31d12db98cef4 100644
--- a/libs/partners/anthropic/tests/unit_tests/test_chat_models.py
+++ b/libs/partners/anthropic/tests/unit_tests/test_chat_models.py
@@ -25,16 +25,18 @@
def test_initialization() -> None:
"""Test chat model initialization."""
for model in [
- ChatAnthropic(model_name="claude-instant-1.2", api_key="xyz", timeout=2), # type: ignore[arg-type]
+ ChatAnthropic(model_name="claude-instant-1.2", api_key="xyz", timeout=2), # type: ignore[arg-type, call-arg]
ChatAnthropic( # type: ignore[call-arg, call-arg, call-arg]
model="claude-instant-1.2",
anthropic_api_key="xyz",
default_request_timeout=2,
+ base_url="https://api.anthropic.com",
),
]:
assert model.model == "claude-instant-1.2"
assert cast(SecretStr, model.anthropic_api_key).get_secret_value() == "xyz"
assert model.default_request_timeout == 2.0
+ assert model.anthropic_api_url == "https://api.anthropic.com"
@pytest.mark.requires("anthropic")
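The `base_url=` keyword exercised by this test works because the field above is declared with `Field(None, alias="base_url")`. A minimal sketch of that pydantic alias pattern, using a hypothetical `ClientSettings` model (alias-based population behaves the same here under pydantic v1 and v2):

```python
from pydantic import BaseModel, Field


class ClientSettings(BaseModel):
    # Stored under the canonical field name, but populated via the
    # user-facing alias, like anthropic_api_url / base_url above.
    anthropic_api_url: str = Field(
        "https://api.anthropic.com", alias="base_url"
    )


settings = ClientSettings(base_url="https://proxy.example")
```

The alias lets the public constructor keyword diverge from the internal attribute name without any custom `__init__` logic.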
diff --git a/libs/partners/openai/langchain_openai/chat_models/base.py b/libs/partners/openai/langchain_openai/chat_models/base.py
index 3861f264040b5..0257893d3ad73 100644
--- a/libs/partners/openai/langchain_openai/chat_models/base.py
+++ b/libs/partners/openai/langchain_openai/chat_models/base.py
@@ -1141,16 +1141,16 @@ class ChatOpenAI(BaseChatOpenAI):
streaming (``{"include_usage": True}``).
Key init args — client params:
- timeout:
+ timeout: Union[float, Tuple[float, float], Any, None]
Timeout for requests.
- max_retries:
+ max_retries: int
Max number of retries.
- api_key:
+ api_key: Optional[str]
OpenAI API key. If not passed in will be read from env var OPENAI_API_KEY.
- base_url:
- Base URL for PAI requests. Only specify if using a proxy or service
+ base_url: Optional[str]
+ Base URL for API requests. Only specify if using a proxy or service
emulator.
- organization:
+ organization: Optional[str]
OpenAI organization ID. If not passed in will be read from env
var OPENAI_ORG_ID.