From fb009beeb3dccdea832c6c2fc61d01ed451f39ba Mon Sep 17 00:00:00 2001
From: Chester Curme
Date: Mon, 16 Dec 2024 15:33:29 -0500
Subject: [PATCH] add admonition to custom llm guide

---
 docs/docs/how_to/custom_llm.ipynb | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/docs/how_to/custom_llm.ipynb b/docs/docs/how_to/custom_llm.ipynb
index d56380332f4b3..8accfd3040755 100644
--- a/docs/docs/how_to/custom_llm.ipynb
+++ b/docs/docs/how_to/custom_llm.ipynb
@@ -9,10 +9,16 @@
     "\n",
     "This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.\n",
     "\n",
-    "Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n",
+    "Wrapping your LLM with the standard `LLM` interface allows you to use your LLM in existing LangChain programs with minimal code modifications.\n",
     "\n",
     "As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box, async support, the `astream_events` API, etc.\n",
     "\n",
+    ":::caution\n",
+    "You are currently on a page documenting the use of [text completion models](/docs/concepts/text_llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/chat_models).\n",
+    "\n",
+    "Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/how_to/custom_chat_model/).\n",
+    ":::\n",
+    "\n",
     "## Implementation\n",
     "\n",
     "There are only two required things that a custom LLM needs to implement:\n",
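The hunk ends just before the guide lists the two required methods. As a minimal sketch of what the guide goes on to describe, assuming the method names `_call` and `_llm_type` from LangChain's documented `LLM` interface (the `LLM` base class below is a hypothetical stand-in so the example runs without LangChain installed):

```python
from typing import Any, List, Optional


class LLM:
    """Hypothetical stand-in for `langchain_core.language_models.llms.LLM`,
    used here so the sketch is self-contained; the real base class also
    supplies batching, async support, streaming, and the Runnable API."""

    def invoke(self, prompt: str) -> str:
        return self._call(prompt)


class EchoLLM(LLM):
    """Toy custom LLM that returns the first `n` characters of the prompt."""

    n: int = 5

    def _call(
        self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any
    ) -> str:
        # A real wrapper would call your model's API here.
        return prompt[: self.n]

    @property
    def _llm_type(self) -> str:
        # Identifies the model type, e.g. for logging and serialization.
        return "echo"
```

Subclassing the real base class with these two members is what makes the wrapper usable as a `Runnable` in existing LangChain programs, as the admonition's surrounding text notes.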