This repo provides a Docker setup for running the LangChain research-assistant template with LangServe.
- Example LangSmith traces
```shell
docker run -d --name langchain-research-assistant-docker \
  -e OPENAI_API_KEY=sk-... \
  -e TAVILY_API_KEY=tvly-... \
  -e LANGCHAIN_API_KEY=ls__... \
  -e LANGCHAIN_TRACING_V2=true \
  -e LANGCHAIN_PROJECT=langchain-research-assistant-docker \
  -p 8000:8000 \
  joshuasundance/langchain-research-assistant-docker:latest
```
```yaml
version: '3.8'
services:
  langchain-research-assistant-docker:
    image: joshuasundance/langchain-research-assistant-docker:latest
    container_name: langchain-research-assistant-docker
    environment:  # use values from .env
      - "OPENAI_API_KEY=${OPENAI_API_KEY:?OPENAI_API_KEY is not set}"  # required
      - "TAVILY_API_KEY=${TAVILY_API_KEY}"  # optional
      - "LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}"  # optional
      - "LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2:-false}"  # false by default
      - "LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT:-langchain-research-assistant-docker}"
    ports:
      - "${APP_PORT:-8000}:8000"
```
The following assumes you have a `.env` file and a Kubernetes cluster running, with `kubectl` configured to access it. It creates a secret called `research-assistant-secret`. To use a different name, edit `./kubernetes/resources.yaml` as well. You can also edit that file and uncomment certain lines to deploy on private endpoints, with a predefined IP, etc.
```shell
kubectl create secret generic research-assistant-secret --from-env-file=.env
kubectl apply -f ./kubernetes/resources.yaml
```
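For reference, `--from-env-file` reads simple `KEY=VALUE` lines from `.env` and turns each into an entry of the secret. A minimal Python sketch of that parsing (blank lines and `#` comments skipped; the real `kubectl` handles more edge cases):

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the first '=' only, so values may contain '='
        key, _, value = line.partition("=")
        entries[key.strip()] = value.strip()
    return entries


sample = """OPENAI_API_KEY=sk-...
# optional keys
TAVILY_API_KEY=tvly-...
"""
print(parse_env_file(sample))
```

Each resulting key becomes an environment variable in the pod via the secret referenced in `./kubernetes/resources.yaml`.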
All deployment options are flexible and configurable.
-
The service will be available at
http://localhost:8000
. -
You can access the OpenAPI documentation at
http://localhost:8000/docs
andhttp://localhost:8000/openapi.json
. -
Access the Research Playground at
http://127.0.0.1:8000/research-assistant/playground/
. -
You can also use the
RemoteRunnable
class to interact with the service:
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/research-assistant")
```
See the LangChain docs for more information.
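Under the hood, `RemoteRunnable` talks to the plain REST routes LangServe mounts under the runnable's path (`invoke`, `batch`, `stream`, `stream_log`), so you can also hit them directly with any HTTP client. A small sketch of the endpoint layout for this service:

```python
from urllib.parse import urljoin

base = "http://localhost:8000/research-assistant/"

# Standard LangServe routes mounted under the runnable's path
routes = {name: urljoin(base, name) for name in ("invoke", "batch", "stream", "stream_log")}

# e.g. POST JSON to routes["invoke"] while the container is running
print(routes["invoke"])  # -> http://localhost:8000/research-assistant/invoke
```

The exact input shape expected by `invoke` is shown in the playground and in the OpenAPI docs linked above.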