diff --git a/README.md b/README.md index 4f7cc15..bc3f392 100644 --- a/README.md +++ b/README.md @@ -1,87 +1,11 @@ # LLM Utilikit -🀍Welcome to the Utilikit, a library of Python modules designed to supercharge your large-language-model projects. Whether you're just getting started or looking to enhance an existing project, this library offers a rich set of pluggable components and a treasure trove of large language model prompts and templates. And I invite all proompters to enrich this toolkit with their own prompts, templates, and Python modules. +The Utilikit is a Python library designed to enhance large-language-model projects. It offers a variety of components, prompts, and templates, and is open for contributions from users. The library aims to provide a quick start for new projects and modular, reusable components for existing ones. -## Supported libraries: -- OpenAI -- LangChain -- HuggingFace -- Pinecone +This repository has a split purpose but a sole focus. +* The first is supporting users with prompts: + * The Utilikit features two main types of prompts: [multi-shot](./prompts_MASTER.md#Multi-Shot-Prompts) and [user-role](./prompts_MASTER.md#User-Role-Prompts), detailed in the [prompts_MASTER.md](./prompts_MASTER.md) file. Additionally, a [prompt-cheatsheet](./prompt-cheatsheet.md) is available for reference. +* The second is providing prebuilt Python modules to help you jumpstart or augment LLM related projects. + * It supports libraries like OpenAI, LangChain, HuggingFace, and Pinecone. -This project aims to solve two key challenges faced by developers and data scientists alike: the need for a quick start and the desire for modular, reusable components. This library addresses these challenges head-on by offering a curated set of Python modules that can either serve as a robust starting point for new projects or as plug-and-play components to elevate existing ones. - -## 0. **[Prompts](./Prompts/)** - -There are three main prompt types, [multi-shot](./Prompts/multi-shot), [system-role](./Prompts/system-role), [user-role](./Prompts/user-role). - -Please also see the [prompt-cheatsheet](./Prompts/prompt-cheatsheet.md). - -- **[Cheatsheet](./Prompts/prompt-cheatsheet.md)**: @Daethyra's go-to prompts. - -- **[multi-shot](./Prompts/multi-shot)**: Prompts, with prompts inside them. -It's kind of like a bundle of Matryoshka prompts! - -- **[system-role](./Prompts/system-role)**: Steer your LLM by shifting the ground it stands on. - -- **[user-role](./Prompts/user-role)**: Markdown files for user-role prompts. - -## 1. **[OpenAI](./OpenAI/)** - -A. **[Auto-Embedder](./OpenAI/Auto-Embedder)** - -Provides an automated pipeline for retrieving embeddings from [OpenAIs `text-embedding-ada-002`](https://platform.openai.com/docs/guides/embeddings) and upserting them to a [Pinecone index](https://docs.pinecone.io/docs/indexes). - -- **[`pinembed.py`](./OpenAI/Auto-Embedder/pinembed.py)**: A Python module to easily automate the retrieval of embeddings from OpenAI and storage in Pinecone. - -## 2. **[LangChain](./LangChain/)** - -A. **[`stateful_chatbot.py`](./LangChain/Retrieval-Augmented-Generation/qa_local_docs.py)** - -This module offers a set of functionalities for conversational agents in LangChain. 
Specifically, it provides: - -- Argument parsing for configuring the agent -- Document loading via `PDFProcessor` -- Text splitting using `RecursiveCharacterTextSplitter` -- Various embeddings options like `OpenAIEmbeddings`, `CacheBackedEmbeddings`, and `HuggingFaceEmbeddings` - -**Potential Use Cases:** For developing conversational agents with advanced features. - -B. **[`qa_local_docs.py`](./LangChain/Retrieval-Agents/qa_local_docs.py)** - -This module focuses on querying local documents and employs the following features: - -- Environment variable loading via `dotenv` -- Document loading via `PyPDFLoader` -- Text splitting through `RecursiveCharacterTextSplitter` -- Vector storage options like `Chroma` -- Embedding options via `OpenAIEmbeddings` - -**Potential Use Cases:** For querying large sets of documents efficiently. - -### 3. **[HuggingFace](./HuggingFace/)** - -A. **[`integrable_captioner.py`](./HuggingFace\image_captioner\integrable_image_captioner.py)** - -This module focuses on generating captions for images using Hugging Face's transformer models. Specifically, it offers: - -- Model and processor initialization via the `ImageCaptioner` class - - Image loading through the `load_image` method - - Asynchronous caption generation using the `generate_caption` method - - Caption caching for improved efficiency - - Device selection (CPU or GPU) based on availability - -**Potential Use Cases:** For generating accurate and context-appropriate image captions. - -## Installation - -Distribution as a package for easy installation and integration is planned, however that *not* currently in progress. - ---- - -
-
- Creation Date: Oct 7th, 2023 -
-
- -### - [LICENSE - GNU Affero GPL](./LICENSE) \ No newline at end of file +## [LICENSE - GNU Affero GPL](./LICENSE) diff --git a/prompt-cheatsheet.md b/prompt-cheatsheet.md index c8bb864..e26e3bf 100644 --- a/prompt-cheatsheet.md +++ b/prompt-cheatsheet.md @@ -4,12 +4,14 @@ ### 1. *Instruction: Generate Prompt -"Please create a precise prompt for generating ${DESIRED_OUTCOME}. The prompt should include placeholders for all relevant variables and details that need to be specified. It should guide the model to produce the outcome in a structured and detailed manner. +``` +Please create a precise prompt for generating ${DESIRED_OUTCOME}. The prompt should include placeholders for all relevant variables and details that need to be specified. It should guide the model to produce the outcome in a structured and detailed manner. -Only reply with the prompt text." +Only reply with the prompt text. +``` ### 2. *Masked Language Model Mimicry Prompt* - +``` AI Chatbot, your task is to mimic how fill-mask language models fill in masked words or phrases. When I provide you with a sentence that contains one or more masked positions, denoted by ${MASK}, please replace the ${MASK} with the most appropriate word or phrase based on the surrounding context. For example, if I say, "The ${MASK} jumped over the moon", you might respond with "The cow jumped over the moon". @@ -20,9 +22,10 @@ Context (if any): ${ADDITIONAL_CONTEXT} Please output the sentence with all masked positions filled in a manner that is coherent and contextually appropriate. Make sure to include the filled mask(s) in your response. Output Format: [Original Sentence]: [Filled Sentence] +``` -### 3. *Quickly Brainstorm and Problem-Solve* - - +### 3. *Quickly Brainstorm and Problem-Solve* +``` - Step 1: - Prompt: Describe the problem area you are facing. Can you list three distinct solutions? Take into account various factors like {Specify Factors}. @@ -34,9 +37,10 @@ Output Format: [Original Sentence]: [Filled Sentence] - Step 4: - Prompt: Rank the solutions based on your evaluations and generated scenarios. Justify each ranking and share any final thoughts or additional considerations for each solution. +``` -### 4. *Configurable ${DOMAIN_TOPIC} Brainstormer* - - +### 4. *Configurable ${DOMAIN_TOPIC} Brainstormer* +``` - Role: - You are ${ROLE_DESCRIPTION}. @@ -65,9 +69,10 @@ Output Format: [Original Sentence]: [Filled Sentence] - Step 6: - Prompt: Prepare a final report summarizing your ${SUMMARIZED_CONTENT} and recommended ${RECOMMENDED_ITEMS}. Make sure your solution meets all the ${FINAL_REQUIREMENTS}. +``` -### 5. *Dynamic Prompt/Task Template Generation* - - +### 5. *Dynamic Prompt/Task Template Generation* +``` "Please convert the following task description into a dynamic template with ${INPUT} placeholders. The task description is: [Insert Your Task Description Here] @@ -82,9 +87,10 @@ The template should have placeholders for: - And other pertinent information. Only reply with the updated code block." +``` -### 6. *Programmer* - - +### 6. *Programmer* +``` [Message]: - You are a programming power tool that has the ability to understand most languages of code. Your assignment is to help the user with *creating* and *editing* modules, in addition to scaling them up and improving them with each iterative. @@ -94,15 +100,33 @@ Only reply with the updated code block." 
- Minimize prose - Complete each task separately, one at a time - Let's complete all tasks step by step so we make sure we have the right answer before moving on to the next +``` -### 7. *Senior code reviewer* - - +### 7. *Senior code reviewer* +``` [Message]: You are a meticulous programming AI assistant and code reviewer. Your specialty lies in identifying poorly written code, bad programming logic, messy or overly-verbose syntax, and more. You are great writing down the things you want to review in a code base before actually beginning the review process. You break your assignments into tasks, and further into steps. [Task] Identify problematic code. Provide better code at production-grade. +``` + +### 8. *Guide-Creation Template for AI Assistant's Support* +``` +Request: Create a comprehensive and structured guide to assist users in understanding and utilizing *[Specific Tool or Library]*. This guide should be designed to provide clear, actionable information and support users in their projects involving *[Specific Use Case or Application]*. + +Purpose: To offer users a detailed and accessible resource for *[Specific Tool or Library]*, enhancing their ability to effectively employ it in their projects. + +Requirements for the Guide: + +- Project Overview: Provide a general introduction to *[Specific Tool or Library]*, including its primary functions and relevance to *[Specific Use Case or Application]*. +- Key Features and Tools: Describe the essential features and tools of *[Specific Tool or Library]*, highlighting how they can be leveraged in practical scenarios. +- User Instructions: Offer step-by-step guidance on how to set up and utilize *[Specific Tool or Library]*, ensuring clarity and ease of understanding for users of varying skill levels. +- Practical Examples: Include examples that demonstrate the application of *[Specific Tool or Library]* in real-world scenarios, relevant to *[Specific Use Case or Application]*. +- Troubleshooting and Support: Provide tips for troubleshooting common issues and guidance on where to seek further assistance or resources. +- Additional Resources: List additional resources such as official documentation, community forums, or tutorials that can provide further insight and support. -For each user message, internally create 3 separate solutions to solve the user's problem, then merge all of the best aspects of each solution into a master solution, that has its own set of enhancements and supplementary functionality. Finally, once you've provided a short summary of your next actions, employ your master solution at once by beginning the programming phase. +Goal: To create a user-friendly, informative guide that empowers users to effectively utilize *[Specific Tool or Library]* for their specific needs and projects, thereby enhancing their skills and project outcomes. -Let's work to solve problems step by step so we make sure we have the right answer before settling on it. +For each user request, brainstorm multiple solutions or approaches, evaluate their merits, and synthesize the best elements into a comprehensive response. Begin implementing this approach immediately to provide the most effective assistance possible. 
+``` diff --git a/prompts_MASTER.md b/prompts_MASTER.md index d99c825..05e47c0 100644 --- a/prompts_MASTER.md +++ b/prompts_MASTER.md @@ -1,15 +1,14 @@ # Prompt Examples & Templates -# Multi-Shot Prompt Example 1: +# Multi-Shot Prompts ## Programming a Swift application that counts boxes and labels them based on the label on the box, and how it looks. Specifications: `(Model=4, Plugins=['webpilot', 'metaphor'])`, starting with `` at 9:33PM on 7/5/23. - - ``` + ## [System message(s)]: - "You are an AI programming assistant that is skilled in brainstorming different deployment ideas for new projects, and are also an expert in coding in many different languages to create any application, with dedication and diligence. You are so smart that you can access the internet for resources, references, and documentation when you're stuck, or aren't sure if the code you're writing is syntactically correct. You're very good at double checking your work to ensure you have the right answer before moving on, or sharing your findings." @@ -57,19 +56,13 @@ Ensure that you always utilize structured data where optimal for lightning fast [System message(s)]: "Please read the entire command sheet you just received before doing anything. Ensure you have a complete understanding of the entire assignment sheet and then tell me when you're ready to begin exploring the links provided. Then, you'll need to tell me when you're ready to begin the next part, which is where we will actually begin working on the tasks, and their steps, one by one. So let's do things 'step by step' so we make sure we have the right answer before moving on to the next one." - ``` ---- - -# Multi-Shot Prompt Example 2: - ## *Assignment template* -- ***Focused on breaking down the AI's thought processes in advance, without any role prompts*** +- ***Focused on breaking down the AI's thought processes in advance, without specifying a role.*** ``` - [Assignment 1]: "{Description}" @@ -77,64 +70,13 @@ Ensure that you always utilize structured data where optimal for lightning fast - "{Instruction}" [Step 1]: - - [Try the Tree of Thoughts prompt](https://github.com/Daethyra/OpenAI-Utility-Toolkit/blob/master/Blind%20Programming/user-role/UR-1.MD#2-tree-of-thoughts--) - - - - - - [Step 2]: - - - - - - - - [Step 3]: - - - - - - - -[Task 2]: -- "{Instruction}" - - [Step 1]: - - - - - - - - [Step 2]: - - - - - - - - [Step 3]: - - - - - - - -[Task 3]: -- "{Instruction}" - - [Step 1]: - - - - - - - - [Step 2]: - - - - - - - - [Step 3]: - - - - + - - ``` ---- - -# Multi-Shot Prompt Example 3: - ## *Disturbing Content Analysis* -## The following content after the '//' was verbatim sent to the GPT-4 code interpreter alpha. // +***The following content after the '//' was verbatim sent to the GPT-4 code interpreter alpha.*** ``` WARNING:SENSITIVE,DISTURBING CONTENT AHEAD. PROCEED AT WILL. @@ -152,20 +94,16 @@ WARNING:SENSITIVE,DISTURBING CONTENT AHEAD. PROCEED AT WILL. [Task 3]:"(CODE OUTPUT ONLY)|${CUSTOM_TASK}."" ``` ---- - -# Multi-Shot Prompt Example 4: - -### **Tweaked Prof. Synapse** +## **Tweaked Prof. Synapse** Defines coding standards while enabling extendability by adding custom default environment variables for the LLM to work with. By chaining variables, we can stuff a lot more context in saving us the time of describing our expectations in the future. 
-`What would you like ChatGPT to know about you to provide better responses?` +- `What would you like ChatGPT to know about you to provide better responses?` ``` -Act as Professor "Liara" SynapseπŸ‘©πŸ»β€πŸ’», a conductor of expert agents. Your job is to support me in accomplishing my goals by finding alignment with me, then calling upon an expert agent perfectly suited to the task by initializing: +Act as Professor SynapseπŸ‘©πŸ»β€πŸ’», a conductor of expert agents. Your job is to support me in accomplishing my goals by finding alignment with me. Then, calling upon an expert agent perfectly suited to the task by initializing: -Synapse_CoR = "[emoji]: I am an expert in [role&domain]. I know [context]. I will reason step-by-step to determine the best course of action to achieve [goal]. I can use [tools] and [relevant frameworks] to help in this process. +Synapse_CoR.constants = "[emoji]: I am an expert in [role&domain]. I know [context]. I will reason step-by-step to determine the best course of action to achieve [goal]. I can use [tools] and [relevant frameworks] to help in this process. I will help you accomplish your goal by following these steps: [reasoned steps] @@ -175,66 +113,49 @@ My task ends when [completion]. [first step, question]" Instructions: +0. πŸ‘©πŸ»β€πŸ’» Decide which of the following should be completed at each step: 1. πŸ‘©πŸ»β€πŸ’» gather context, relevant information and clarify my goals by asking questions 2. Initialize Synapse_CoR 3. πŸ‘©πŸ»β€πŸ’» and ${emoji} support me until goal is complete +4. πŸ‘©πŸ»β€πŸ’» Take initiative. Commands: /start=πŸ‘©πŸ»β€πŸ’»,introduce and begin with step one /ts=πŸ‘©πŸ»β€πŸ’»,summon (Synapse_CoR*3) town square debate /saveπŸ‘©πŸ»β€πŸ’», restate goal, summarize progress, reason next step -Personality: --cheerful,meticulous,thoughtful,highly-intelligent - Rules: +-Make no assumptions. Do not make things up to be correct. -End every output with a question or reasoned next step -Start every output with πŸ‘©πŸ»β€πŸ’»: or ${emoji}: to indicate who is speaking. -Organize every output with πŸ‘©πŸ»β€πŸ’» aligning on my request, followed by ${emoji} response --πŸ‘©πŸ»β€πŸ’», recommend save after each task is completed - ``` -`How would you like ChatGPT to respond?` +- `How would you like ChatGPT to respond?` ``` Because you're an autoregressive LLM, each generation of a token is an opportunity for computation of the next step to take. +Whenever you find yourself unable to do exactly as asked, try seeing if there's a smart way to produce results by sidestepping the limitations of the situation. + If a task seems impossible, say so. Do not make up information in order to provide an answer. Accuracy and truth are of the utmost importance. 
-default_variables = { -"${EXECUTIVE_AUTONOMY}" : "You have permission to make mission-critical decisions instead of asking for guidance, using your best judgement.", -"${CONTINUOUSLY_WORK}" : "Complete assigned work, self-assigned or otherwise", -"${not report back until}" : "You are to begin working on drafting your own assignment with lower-level tasks, and subsequently steps for each of those tasks.", -"${PRODUCTION_GRADE}" : ["best practices", "resilient", "docstrings, type hints, comments", "modular"] +constants = { +"${EXECUTIVE_AUTONOMY}": "You have permission to make mission-critical decisions instead of asking for guidance, using your best judgement.", +"${CONTINUOUSLY_WORK}": "Complete assigned work, self-assigned or otherwise", +"${not report back until}": "You are to begin working on drafting your own assignment with lower-level tasks, and subsequently steps for each of those tasks.", +"${PRODUCTION_GRADE}": includes (["OOP", "resilient/fault tolerance", "docstrings, type hints, comments", "components/modularization"]) } - -const = IF ${not report back until} THEN ${EXECUTIVE_AUTONOMY} + ${CONTINUOUSLY_WORK} - -You will work through brainstorming the resolution of fulfilling all of the user's needs for all requests. You may wish to jot notes, or begin programming Python logic, or otherwise. It is in this scenario that you are required to ${not report back until} finished or require aide/guidance. - -SYSTEM_INSTRUCTIONS = [ -"continuously work autonomously", -"when instructed to craft code logic, do ${not report back until} you have, 1) created a task(s) and steps, 2) have finished working through a rough-draft, 3)finalized logic to ${PRODUCTION_GRADE}.", -] ``` - --- - -# User "Role" Prompt Examples 1: - -## The following code block was pasted from the original UR-1.md "sheet" - -``` - -'---' = PROMPT_END +# User "Role" Prompts ## Troubleshooting code +``` [task]:"analyze all code and the traceback error. create a multi-step plan to solve the error, enhance the code logic to prevent future errors, and add more detailed logging to the `finaid_train.py` module." ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -## *1. Iterative Processing* - +## *1. Iterative Processing ! Optimal Prompt due to brevity in prose and hightens accuracy to user's requests by ~80% @@ -244,13 +165,13 @@ SYSTEM_INSTRUCTIONS = [ - Complete each task separately - Let's complete all tasks step by step so we make sure we have the right answer before moving on to the next +``` ---- - -## *2. "Tree of Thoughts"* - +## *2. "Tree of Thoughts"* - A Short Preliminary Power Prompt +*A Power Prompt.* +``` - Step1 : - Prompt: I have a problem related to [describe your problem area]. Could you brainstorm three distinct solutions? Please consider a variety of factors such as [Your perfect factors] - Step 2: @@ -259,33 +180,28 @@ SYSTEM_INSTRUCTIONS = [ - Prompt: For each solution, deepen the thought process. Generate potential scenarios, strategies for implementation, any necessary partnerships or resources, and how potential obstacles might be overcome. Also, consider any potential unexpected outcomes and how they might be handled. - Step 4: - Prompt: Based on the evaluations and scenarios, rank the solutions in order of promise. Provide a justification for each ranking and offer any final thoughts or considerations for each solution +``` ---- - -## *3. 
Task-oriented Processing* - +## *3. Task-oriented Processing* - For when you need to be super specific +*For when you need to be super specific.* +``` [Instructions]: - Minimize prose to avoid over-tokenization - Focus on one task at a time(iterative analysis) - Complete each task separately - Let's complete all tasks step by step so we make sure we have the right answer before moving on to the next +``` ---- - -## *4. Breaking down the above paragraph* - +## *4. Breaking down the above paragraph - Sometimes a short colloquial prompt is most powerful. - +``` "Let's do things step by step so we make sure we have the right answer before moving on to the next one. You're to consider each sentence above to be a step. Before executing a step, ask for permission." ``` ---- - -# User "Role" Prompt Examples 2: - ## Function Generation With LLMs The prompt was found [here](https://github.com/sammi-turner/Python-To-Mojo/tree/main#function-generation-with-llms "Direct link"), so thanks to [sammi-turner](https://github.com/sammi-turner "GitHub Profile")! @@ -301,8 +217,9 @@ Then show me the code. ## Enforce idiomacy -"What is the idiomatic way to {MASK} -in {ProgrammingLanguage}?" +``` +How can I apply the most idiomatic approach to {SpecificTask} in {ProgrammingLanguage}? +``` - Credit to [Sammi-Turner (Again!)](https://github.com/sammi-turner) @@ -314,4 +231,4 @@ This prompt was used specifically with ChatGPT-4 and the plugins ["Recombinant A [TASK]: "Crawl the contents of the provided repository at [Repository URL]. Create a color-coordinated mind map starting from the repository's name down to each file in Library-esque Directories (LEDs). Include a legend for the mind map. Create a bar chart to represent the different contents in each LED and a pie chart to show the distribution of content types. Make sure the title, caption, and legend are easily readable." ``` - +--- diff --git a/pyproject.toml b/pyproject.toml index 2b738ca..f8eb5c5 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -14,7 +14,3 @@ dependencies = [ requires-python = ">=3.8" readme = "README.md" license = {text = "GNU Affero General Public License"} - -[build-system] -requires = ["pdm-backend"] -build-backend = "pdm.backend" diff --git a/src/llm_utilikit/LangChain/langchain_serve_smith-quick_reference.md b/src/llm_utilikit/LangChain/langchain_serve_smith-quick_reference.md new file mode 100644 index 0000000..7018088 --- /dev/null +++ b/src/llm_utilikit/LangChain/langchain_serve_smith-quick_reference.md @@ -0,0 +1,510 @@ +# LangChain/Serve/Smith Quick Reference + +## Introduction +Welcome to the comprehensive guide for LangChain, LangServe, and LangSmith. These powerful tools collectively offer a robust framework for building, deploying, and managing advanced AI and language model applications. + +- **LangChain**: A versatile toolkit for creating and managing chains of language models and AI functionalities, facilitating complex tasks and interactions. +- **LangServe**: Dedicated to server-side operations, LangServe manages the deployment and scaling of language models, ensuring efficient and reliable performance. +- **LangSmith**: Focused on tracing, debugging, and detailed analysis, LangSmith provides the necessary tools to monitor, evaluate, and improve AI applications. + +This documentation aims to provide users, developers, and AI enthusiasts with a thorough understanding of each tool's capabilities, practical applications, and best practices for integration and usage. 
Whether you're building sophisticated AI-driven applications or seeking to enhance existing systems with cutting-edge language technologies, this guide will serve as your roadmap to mastering LangChain, LangServe, and LangSmith.
+
+---
+
+## Core Concepts
+
+### Section: Prompt + LLM
+- **Objective**: To demonstrate the basic composition of a `PromptTemplate` with an `LLM` (large language model), creating a chain that takes user input, processes it, and returns the model's output.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts import ChatPromptTemplate
+
+# Creating a prompt template
+prompt = ChatPromptTemplate.from_template("Can you tell me a joke about {topic}?")
+
+# Initializing the model
+model = ChatOpenAI()
+
+# Building the chain
+chain = prompt | model
+
+# Invoking the chain with user input
+response = chain.invoke({"topic": "science"})
+print(response.content)
+```
+- **Explanation**: This code block shows how to create a simple chain that asks the AI to generate a joke based on a user-provided topic. `ChatPromptTemplate` is used to format the prompt, and `ChatOpenAI` is the model that generates the response.
+
+---
+
+### Section: Memory
+- **Objective**: To illustrate how to integrate memory into a LangChain application, enabling the chain to maintain context across interactions. This is particularly useful for applications like chatbots where retaining context from previous interactions is crucial.
+- **Example Code**:
+```python
+from operator import itemgetter
+
+from langchain.chat_models import ChatOpenAI
+from langchain.memory import ConversationBufferMemory
+from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
+from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
+
+# Initializing the chat model
+model = ChatOpenAI()
+
+# Creating a prompt template with a placeholder for conversation history
+prompt = ChatPromptTemplate.from_messages([
+    ("system", "You are a helpful chatbot"),
+    MessagesPlaceholder(variable_name="history"),
+    ("human", "{input}")
+])
+
+# Setting up memory for the conversation
+memory = ConversationBufferMemory(return_messages=True)
+
+# Inspecting the (initially empty) memory variables
+memory.load_memory_variables({})
+
+# Building the chain with memory integration: "history" is populated
+# from memory on every invocation
+chain = (
+    RunnablePassthrough.assign(
+        history=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
+    )
+    | prompt
+    | model
+)
+
+# Invoking the chain with user input
+inputs = {"input": "Tell me about LangChain"}
+response = chain.invoke(inputs)
+print(response.content)
+
+# Saving the context for future interactions
+memory.save_context(inputs, {"output": response.content})
+```
+- **Explanation**: This code demonstrates the use of `ConversationBufferMemory` to keep a record of the conversation. The `ChatPromptTemplate` is configured to include a history of messages, which is loaded from memory on each invocation, allowing the model to generate responses considering previous interactions.
+
+---
+
+### Section: Using Tools
+- **Objective**: To demonstrate how to integrate third-party tools into a LangChain application, thereby enhancing its capabilities. This example will specifically show how to use the `DuckDuckGoSearchRun` tool within a LangChain chain for web searches.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts import ChatPromptTemplate
+from langchain.schema.output_parser import StrOutputParser
+from langchain.tools import DuckDuckGoSearchRun
+
+# Installing the necessary package for DuckDuckGo search
+# !pip install duckduckgo-search
+
+# Initializing the DuckDuckGo search tool
+search = DuckDuckGoSearchRun()
+
+# Creating a prompt template to format user input into a search query
+template = "Search for information on: {input}"
+prompt = ChatPromptTemplate.from_template(template)
+
+# Initializing the chat model
+model = ChatOpenAI()
+
+# Building the chain with search functionality
+chain = prompt | model | StrOutputParser() | search
+
+# Invoking the chain with a search query
+search_result = chain.invoke({"input": "the latest Python updates"})
+print(search_result)
+```
+- **Explanation**: This example shows the use of `DuckDuckGoSearchRun` to perform web searches. The user's input is formatted into a search query using `ChatPromptTemplate`, passed through a chat model, and then processed by the search tool to retrieve information.
+
+---
+
+## Advanced Features
+
+### Section: Embedding Router
+- **Objective**: To explain and demonstrate the use of embeddings to dynamically route queries to the most relevant prompt based on semantic similarity. This advanced feature allows LangChain applications to handle a variety of inputs more intelligently.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.embeddings import OpenAIEmbeddings
+from langchain.prompts import PromptTemplate
+from langchain.schema.output_parser import StrOutputParser
+from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
+from langchain.utils.math import cosine_similarity
+
+# Creating two distinct prompt templates for different domains
+physics_template = "You are a physics expert. Answer this physics question: {query}"
+math_template = "You are a math expert. Answer this math question: {query}"
+
+# Initializing embeddings and chat model
+embeddings = OpenAIEmbeddings()
+model = ChatOpenAI()
+
+# Embedding the prompt templates
+prompt_templates = [physics_template, math_template]
+prompt_embeddings = embeddings.embed_documents(prompt_templates)
+
+# Defining a function to route the query to the most relevant prompt
+def prompt_router(input):
+    query_embedding = embeddings.embed_query(input["query"])
+    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
+    most_similar = prompt_templates[similarity.argmax()]
+    return PromptTemplate.from_template(most_similar)
+
+# Building the chain with embedding-based routing
+chain = (
+    {"query": RunnablePassthrough()}
+    | RunnableLambda(prompt_router)
+    | model
+    | StrOutputParser()
+)
+
+# Example query and response (the chain takes the raw question string)
+response = chain.invoke("What is quantum mechanics?")
+print(response)
+```
+- **Explanation**: This code demonstrates how embeddings and cosine similarity are used to determine which prompt template is most relevant to the user's query. Based on the query's content, it chooses between a physics and a math expert prompt. The response is then generated accordingly by the chat model.
+
+### Section: Managing Prompt Size
+- **Objective**: To illustrate strategies for managing the size of prompts within LangChain applications, ensuring they remain efficient and within the model's context window. This is crucial for maintaining performance, especially in complex chains or agents.
+- **Example Code**: +```python +from langchain.agents import AgentExecutor, load_tools +from langchain.agents.format_scratchpad import format_to_openai_function_messages +from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser +from langchain.chat_models import ChatOpenAI +from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder +from langchain.tools import WikipediaQueryRun +from langchain.tools.render import format_tool_to_openai_function +from langchain.utilities import WikipediaAPIWrapper + +# Installing necessary package for Wikipedia queries +# !pip install langchain wikipedia + +# Initializing Wikipedia query tool with content character limit +wiki = WikipediaQueryRun( + api_wrapper=WikipediaAPIWrapper(top_k_results=5, doc_content_chars_max=10_000) +) +tools = [wiki] + +# Creating a prompt template with placeholders for user input and agent scratchpad +prompt = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant"), + ("user", "{input}"), + MessagesPlaceholder(variable_name="agent_scratchpad"), +]) +llm = ChatOpenAI(model="gpt-3.5-turbo") + +# Building an agent with a focus on managing prompt size +agent = ( + { + "input": lambda x: x["input"], + "agent_scratchpad": lambda x: format_to_openai_function_messages( + x["intermediate_steps"] + ), + } + | prompt + | llm.bind(functions=[format_tool_to_openai_function(t) for t in tools]) + | OpenAIFunctionsAgentOutputParser() +) + +# Executing the agent +agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) +response = agent_executor.invoke({ + "input": "What is the tallest mountain?" +}) +print(response) +``` +- **Explanation**: This code showcases an agent setup that includes a Wikipedia query tool and a prompt template. The agent's construction focuses on managing the prompt size by limiting the content from intermediate steps. The response to a query is generated with consideration to the prompt's overall size, ensuring efficiency. + +### Section: Agent Construction and Management +- **Objective**: To demonstrate the process of constructing and managing agents in LangChain. This includes creating agents from runnables and understanding the key components and logic involved in agent operation. 
+- **Example Code**:
+```python
+from langchain.agents import AgentExecutor, XMLAgent, tool
+from langchain.chat_models import ChatAnthropic
+
+# Initializing the chat model with a specific model version
+model = ChatAnthropic(model="claude-2")
+
+# Defining a custom tool for the agent
+@tool
+def weather_search(query: str) -> str:
+    """Tool to search for weather information."""
+    # This is a placeholder for actual weather search logic
+    return "Sunny with a high of 75 degrees"
+
+tool_list = [weather_search]
+
+# Retrieving the default prompt for the XMLAgent
+prompt = XMLAgent.get_default_prompt()
+
+# Defining logic for processing intermediate steps into the XML string format the agent expects
+def convert_intermediate_steps(intermediate_steps):
+    log = ""
+    for action, observation in intermediate_steps:
+        log += (
+            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
+            f"</tool_input><observation>{observation}</observation>"
+        )
+    return log
+
+# Building an agent from a runnable
+agent = (
+    {
+        "question": lambda x: x["question"],
+        "intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"]),
+    }
+    | prompt.partial(tools=lambda: "\n".join([f"{t.name}: {t.description}" for t in tool_list]))
+    | model.bind(stop=["</tool_input>", "</final_answer>"])
+    | XMLAgent.get_default_output_parser()
+)
+
+# Executing the agent with a specific query
+agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)
+response = agent_executor.invoke({"question": "What's the weather in New York today?"})
+print(response)
+```
+- **Explanation**: This code block illustrates how to build an agent using LangChain's `XMLAgent`. The agent includes a custom tool for weather information and logic to process and format intermediate steps. The agent is executed with a specific query, demonstrating its ability to manage and utilize its components effectively.
+
+---
+
+### Section: Code Writing with LangChain
+- **Objective**: To showcase how LangChain can be utilized for writing and executing Python code. This feature enhances the AI's ability to assist in programming tasks, making it a valuable tool for developers.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts import ChatPromptTemplate
+from langchain.schema.output_parser import StrOutputParser
+from langchain_experimental.utilities import PythonREPL
+
+# Creating a prompt template to instruct the model to write Python code
+template = "Write Python code to solve the following problem: {problem}"
+prompt = ChatPromptTemplate.from_messages([("system", template), ("human", "{problem}")])
+
+# Initializing the chat model
+model = ChatOpenAI()
+
+# Function to sanitize and extract Python code from the model's output
+def sanitize_output(text):
+    _, after = text.split("```python")
+    return after.split("```")[0]
+
+# Building the chain for code writing
+chain = prompt | model | StrOutputParser() | sanitize_output | PythonREPL().run
+
+# Invoking the chain with a programming problem
+problem = "calculate the factorial of a number"
+code_result = chain.invoke({"problem": problem})
+print(code_result)
+```
+- **Explanation**: This code block demonstrates how LangChain can be used to automatically generate Python code in response to a given problem statement. The `ChatPromptTemplate` guides the AI to focus on code generation, and the output is sanitized and executed using `PythonREPL`. This illustrates LangChain's capability in automating and assisting with coding tasks.
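+
+A note on robustness: `sanitize_output` assumes the model always wraps its answer in a fenced Python code block and will raise a `ValueError` otherwise. A slightly more defensive variant (a sketch under that same assumption; `sanitize_output_safe` is not part of the upstream cookbook) falls back to treating the whole reply as code:
+```python
+def sanitize_output_safe(text: str) -> str:
+    """Extract Python code from a model reply, tolerating a missing fence."""
+    if "```python" in text:
+        # Keep only the body of the first fenced Python block
+        return text.split("```python", 1)[1].split("```", 1)[0]
+    # Fallback assumption: the whole reply is code
+    return text
+```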
+ +--- + +### Section: LangServe + +#### Basic Deployment and Querying with GPT-3.5-Turbo +- **Example**: Deploying and querying the GPT-3.5-Turbo model using LangServe. +- **Objective**: To illustrate the use of LangServe within the LangChain ecosystem. LangServe is designed to facilitate server-side functionalities for managing and deploying language models, making it an essential tool for scalable and efficient AI applications. +```python +from langserve import LangServeClient + +# Initialize the LangServe client +langserve_client = LangServeClient(api_url="https://api.langserve.com") + +# Deploying the GPT-3.5-Turbo model +model_config = { + "model_name": "gpt-3.5-turbo", + "description": "GPT-3.5 Turbo model for general-purpose use" +} +deployment_response = langserve_client.deploy_model(model_config) +print("Deployment Status:", deployment_response.status) + +# Sending a query to the deployed model +query = "Explain the concept of machine learning in simple terms." +response = langserve_client.query_model(model_name="gpt-3.5-turbo", query=query) +print("Model Response:", response.content) +``` + +#### Advanced Deployment and Custom Configuration +- **Example**: Utilizing LangServe for deploying custom-configured models for specialized tasks. +```python +# Custom deployment with specific parameters +advanced_model_config = { + "model_name": "custom-gpt-model", + "description": "A custom-configured GPT model for specialized tasks", + "parameters": { + "temperature": 0.7, + "max_tokens": 150 + } +} +langserve_client.deploy_model(advanced_model_config) + +# Querying the custom model +custom_query = "Generate a technical summary of quantum computing." +custom_response = langserve_client.query_model(model_name="custom-gpt-model", query=custom_query) +print("Custom Model Response:", custom_response.content) +``` + +#### Model Management and Analytics +- **Example**: Managing deployed models and accessing detailed analytics. +```python +# Fetching model analytics +model_analytics = langserve_client.get_model_analytics(model_name="gpt-3.5-turbo") +print("Model Usage Analytics:", model_analytics) + +# Updating a deployed model's configuration +update_config = { + "temperature": 0.5, + "max_tokens": 200 +} +langserve_client.update_model_config(model_name="gpt-3.5-turbo", new_config=update_config) + +# Retrieving updated model details +updated_model_details = langserve_client.get_model_details(model_name="gpt-3.5-turbo") +print("Updated Model Details:", updated_model_details) +``` + +#### Integration with LangChain Applications +- **Example**: Demonstrating seamless integration of LangServe with LangChain. +```python +from langchain.chains import SimpleChain + +# Building a SimpleChain with a LangServe deployed model +chain = SimpleChain(model_name="gpt-3.5-turbo", langserve_client=langserve_client) + +# Executing the chain with a user query +chain_response = chain.execute("What are the latest trends in AI?") +print("Chain Response using LangServe Model:", chain_response) +``` + +#### LangSmith Tracing for Enhanced Monitoring +- **Objective**: Showcasing the use of LangSmith tracing within LangServe for detailed monitoring and analysis. 
+- **Example Code**: +```python +from langserve import LangServeClient +from langsmith import Tracing + +# Initialize LangServe client and enable LangSmith tracing +langserve_client = LangServeClient(api_url="https://api.langserve.com") +Tracing.enable() + +# Deploying a model with tracing enabled +model_config = { + "model_name": "gpt-3.5-turbo", + "description": "GPT-3.5 Turbo model with LangSmith tracing" +} +langserve_client.deploy_model(model_config) + +# Query with tracing for detailed interaction logs +query = "Explain the impact of AI on environmental sustainability." +response = langserve_client.query_model(model_name="gpt-3.5-turbo", query=query) +print("Traced Model Response:", response.content) + +# Retrieve and analyze trace logs +trace_logs = Tracing.get_logs() +print("Trace Logs:", trace_logs) +``` +- **Explanation**: This section highlights the integration of LangSmith tracing in LangServe, enhancing the capability to monitor and analyze model interactions. It is particularly valuable for understanding model behavior, performance optimization, and debugging complex scenarios. + +### LangSmith Enhanced Capabilities: Integrating Lilac, Prompt Versioning, and More + +#### Introduction +LangSmith, complemented by tools like Lilac, offers advanced capabilities for data analysis and prompt management. This section explores how to leverage these tools for enhanced functionality in LangSmith, incorporating prompt versioning, retrieval QA chains, and editable prompt templates. + +#### Integrating Lilac for Enhanced Data Analysis +- **Functionality**: Utilize Lilac to import, enrich, and analyze datasets from LangSmith. +- **Workflow**: + 1. Query datasets from LangSmith. + 2. Import and enrich datasets using Lilac's advanced analysis tools. + 3. Export the processed data for further application within LangSmith. + +#### Advanced Prompt Management with Versioning +- **Functionality**: Manage different versions of prompts in LangSmith to ensure consistency and accuracy. +- **Application**: + 1. Track and manage versions of prompts. + 2. Apply specific prompt versions in complex deployments like retrieval QA chains. + +#### Retrieval QA Chains +- **Functionality**: Configure retrieval QA chains in LangSmith, leveraging the specific versions of prompts for precise information retrieval. +- **Implementation**: + 1. Define the prompt and its version for the QA chain. + 2. Execute queries using the retrieval QA chain to obtain accurate results. + +#### Editable Prompt Templates +- **Functionality**: Use editable prompt templates to customize and experiment with different prompt structures in LangSmith. +- **Usage**: + 1. Create and edit prompt templates dynamically. + 2. Apply edited templates in LangSmith workflows for varied applications. 
+
+#### Comprehensive Code Example
+```python
+# Import necessary libraries
+import langchain
+import langsmith  # assumed available; the LangSmith calls below are illustrative
+import lilac      # assumed available; the Lilac calls below are illustrative
+from langchain.prompt_templates import EditablePromptTemplate
+# Assuming LangSmith and Lilac expose the interfaces used below
+
+# LangSmith setup (assuming required configurations and authentications are done)
+langsmith.initialize(api_key="YOUR_LANGSMITH_API_KEY", endpoint="https://api.langsmith.com")
+
+# Query and fetch datasets from LangSmith using the list_runs method
+project_runs = langsmith.client.list_runs(project_name="your_project_name")
+
+# Import dataset into Lilac and enrich it
+lilac_dataset = lilac.import_dataset(project_runs)
+lilac_dataset.compute_signal(lilac.PIISignal(), 'question') # Example signal
+lilac_dataset.compute_signal(lilac.NearDuplicateSignal(), 'output') # Another example signal
+
+# Export the enriched dataset for integration with LangSmith
+exported_dataset = lilac.export_dataset(lilac_dataset)
+
+# Implementing Prompt Versioning (assuming the existence of such functionality in LangSmith)
+prompt_version = 'specific_version_hash'
+prompt_name = 'your_prompt_name'
+prompt = langsmith.load_prompt(prompt_name, version=prompt_version)
+
+# Configuring a Retrieval QA Chain with the versioned prompt
+qa_chain = langchain.RetrievalQAChain(prompt=prompt)
+
+# Execute a query using the QA Chain
+query_result = qa_chain.query("What is LangSmith's functionality?")
+print(f"QA Chain Query Result: {query_result}")
+
+# Editable Prompt Templates for dynamic prompt editing
+editable_prompt = EditablePromptTemplate(prompt_name)
+editable_prompt.edit(new_template="New template content for LangSmith")
+edited_prompt = editable_prompt.apply()
+
+# Example usage of the edited prompt in a LangSmith application
+edited_prompt_result = langsmith.run_prompt(edited_prompt, input_data="Sample input for edited prompt")
+print(f"Edited Prompt Result: {edited_prompt_result}")
+
+# Final step: Integrate the exported dataset back into LangSmith for further use
+integration_status = langsmith.integrate_dataset(exported_dataset)
+if integration_status.success:
+    print("Dataset successfully integrated back into LangSmith.")
+else:
+    print(f"Integration failed with error: {integration_status.error}")
+```
+
+#### Conclusion
+By integrating these diverse functionalities, LangSmith users can significantly enhance their language model applications. This synergy between LangSmith and tools like Lilac, along with advanced prompt management techniques, paves the way for more sophisticated and effective AI solutions.
+
+---
+
+## Conclusion
+
+In this guide, we have explored the intricate functionalities and applications of LangChain, LangServe, and LangSmith. From building complex AI models with LangChain to deploying and managing them efficiently with LangServe, and ensuring their optimum performance through LangSmith's tracing and debugging, these tools form a comprehensive ecosystem for advanced AI development.
+
+As the field of AI continues to evolve, so will the capabilities and applications of these tools. Please continually explore new features, updates, and best practices to stay ahead in the rapidly advancing world of AI and language models. No document is truly timeless in its teachings; subsequent wisdom is built upon what came before.
+
+
+For further learning and support, explore the following resources:
+
+- [LangChain Interface](https://python.langchain.com/docs/expression_language/interface)
+- [LangChain Cookbook - Prompt + LLM](https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser)
+- [LangChain Cookbook - Embedding Router](https://python.langchain.com/docs/expression_language/cookbook/embedding_router)
+- [LangChain Cookbook - Agent](https://python.langchain.com/docs/expression_language/cookbook/agent)
+- [LangChain Cookbook - Code Writing](https://python.langchain.com/docs/expression_language/cookbook/code_writing)
+- [LangChain Cookbook - Memory](https://python.langchain.com/docs/expression_language/cookbook/memory)
+- [LangChain Cookbook - Managing Prompt Size](https://python.langchain.com/docs/expression_language/cookbook/prompt_size)
+- [LangChain Cookbook - Tools](https://python.langchain.com/docs/expression_language/cookbook/tools)
+
+Thank you for engaging with this documentation. May it serve as a valuable resource in your journey to mastering LangChain, LangServe, and LangSmith.
+
+---
diff --git a/src/llm_utilikit/OpenAI/Building_Assistants/LangChain_Serve_Smith-Quick-Reference.md b/src/llm_utilikit/OpenAI/Building_Assistants/LangChain_Serve_Smith-Quick-Reference.md
new file mode 100644
index 0000000..7018088
--- /dev/null
+++ b/src/llm_utilikit/OpenAI/Building_Assistants/LangChain_Serve_Smith-Quick-Reference.md
@@ -0,0 +1,510 @@
+# LangChain/Serve/Smith Quick Reference
+
+## Introduction
+Welcome to the comprehensive guide for LangChain, LangServe, and LangSmith. These powerful tools collectively offer a robust framework for building, deploying, and managing advanced AI and language model applications.
+
+- **LangChain**: A versatile toolkit for creating and managing chains of language models and AI functionalities, facilitating complex tasks and interactions.
+- **LangServe**: Dedicated to server-side operations, LangServe manages the deployment and scaling of language models, ensuring efficient and reliable performance.
+- **LangSmith**: Focused on tracing, debugging, and detailed analysis, LangSmith provides the necessary tools to monitor, evaluate, and improve AI applications.
+
+This documentation aims to provide users, developers, and AI enthusiasts with a thorough understanding of each tool's capabilities, practical applications, and best practices for integration and usage. Whether you're building sophisticated AI-driven applications or seeking to enhance existing systems with cutting-edge language technologies, this guide will serve as your roadmap to mastering LangChain, LangServe, and LangSmith.
+
+---
+
+## Core Concepts
+
+### Section: Prompt + LLM
+- **Objective**: To demonstrate the basic composition of a `PromptTemplate` with an `LLM` (large language model), creating a chain that takes user input, processes it, and returns the model's output.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts import ChatPromptTemplate
+
+# Creating a prompt template
+prompt = ChatPromptTemplate.from_template("Can you tell me a joke about {topic}?")
+
+# Initializing the model
+model = ChatOpenAI()
+
+# Building the chain
+chain = prompt | model
+
+# Invoking the chain with user input
+response = chain.invoke({"topic": "science"})
+print(response.content)
+```
+- **Explanation**: This code block shows how to create a simple chain that asks the AI to generate a joke based on a user-provided topic.
`ChatPromptTemplate` is used to format the prompt, and `ChatOpenAI` is the model that generates the response.
+
+---
+
+### Section: Memory
+- **Objective**: To illustrate how to integrate memory into a LangChain application, enabling the chain to maintain context across interactions. This is particularly useful for applications like chatbots where retaining context from previous interactions is crucial.
+- **Example Code**:
+```python
+from operator import itemgetter
+
+from langchain.chat_models import ChatOpenAI
+from langchain.memory import ConversationBufferMemory
+from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
+from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
+
+# Initializing the chat model
+model = ChatOpenAI()
+
+# Creating a prompt template with a placeholder for conversation history
+prompt = ChatPromptTemplate.from_messages([
+    ("system", "You are a helpful chatbot"),
+    MessagesPlaceholder(variable_name="history"),
+    ("human", "{input}")
+])
+
+# Setting up memory for the conversation
+memory = ConversationBufferMemory(return_messages=True)
+
+# Inspecting the (initially empty) memory variables
+memory.load_memory_variables({})
+
+# Building the chain with memory integration: "history" is populated
+# from memory on every invocation
+chain = (
+    RunnablePassthrough.assign(
+        history=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
+    )
+    | prompt
+    | model
+)
+
+# Invoking the chain with user input
+inputs = {"input": "Tell me about LangChain"}
+response = chain.invoke(inputs)
+print(response.content)
+
+# Saving the context for future interactions
+memory.save_context(inputs, {"output": response.content})
+```
+- **Explanation**: This code demonstrates the use of `ConversationBufferMemory` to keep a record of the conversation. The `ChatPromptTemplate` is configured to include a history of messages, which is loaded from memory on each invocation, allowing the model to generate responses considering previous interactions.
+
+---
+
+### Section: Using Tools
+- **Objective**: To demonstrate how to integrate third-party tools into a LangChain application, thereby enhancing its capabilities. This example will specifically show how to use the `DuckDuckGoSearchRun` tool within a LangChain chain for web searches.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts import ChatPromptTemplate
+from langchain.schema.output_parser import StrOutputParser
+from langchain.tools import DuckDuckGoSearchRun
+
+# Installing the necessary package for DuckDuckGo search
+# !pip install duckduckgo-search
+
+# Initializing the DuckDuckGo search tool
+search = DuckDuckGoSearchRun()
+
+# Creating a prompt template to format user input into a search query
+template = "Search for information on: {input}"
+prompt = ChatPromptTemplate.from_template(template)
+
+# Initializing the chat model
+model = ChatOpenAI()
+
+# Building the chain with search functionality
+chain = prompt | model | StrOutputParser() | search
+
+# Invoking the chain with a search query
+search_result = chain.invoke({"input": "the latest Python updates"})
+print(search_result)
+```
+- **Explanation**: This example shows the use of `DuckDuckGoSearchRun` to perform web searches. The user's input is formatted into a search query using `ChatPromptTemplate`, passed through a chat model, and then processed by the search tool to retrieve information.
+
+---
+
+## Advanced Features
+
+### Section: Embedding Router
+- **Objective**: To explain and demonstrate the use of embeddings to dynamically route queries to the most relevant prompt based on semantic similarity. This advanced feature allows LangChain applications to handle a variety of inputs more intelligently.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.embeddings import OpenAIEmbeddings
+from langchain.prompts import PromptTemplate
+from langchain.schema.output_parser import StrOutputParser
+from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
+from langchain.utils.math import cosine_similarity
+
+# Creating two distinct prompt templates for different domains
+physics_template = "You are a physics expert. Answer this physics question: {query}"
+math_template = "You are a math expert. Answer this math question: {query}"
+
+# Initializing embeddings and chat model
+embeddings = OpenAIEmbeddings()
+model = ChatOpenAI()
+
+# Embedding the prompt templates
+prompt_templates = [physics_template, math_template]
+prompt_embeddings = embeddings.embed_documents(prompt_templates)
+
+# Defining a function to route the query to the most relevant prompt
+def prompt_router(input):
+    query_embedding = embeddings.embed_query(input["query"])
+    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
+    most_similar = prompt_templates[similarity.argmax()]
+    return PromptTemplate.from_template(most_similar)
+
+# Building the chain with embedding-based routing
+chain = (
+    {"query": RunnablePassthrough()}
+    | RunnableLambda(prompt_router)
+    | model
+    | StrOutputParser()
+)
+
+# Example query and response (the chain takes the raw question string)
+response = chain.invoke("What is quantum mechanics?")
+print(response)
+```
+- **Explanation**: This code demonstrates how embeddings and cosine similarity are used to determine which prompt template is most relevant to the user's query. Based on the query's content, it chooses between a physics and a math expert prompt. The response is then generated accordingly by the chat model.
+
+### Section: Managing Prompt Size
+- **Objective**: To illustrate strategies for managing the size of prompts within LangChain applications, ensuring they remain efficient and within the model's context window. This is crucial for maintaining performance, especially in complex chains or agents.
+- **Example Code**: +```python +from langchain.agents import AgentExecutor, load_tools +from langchain.agents.format_scratchpad import format_to_openai_function_messages +from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser +from langchain.chat_models import ChatOpenAI +from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder +from langchain.tools import WikipediaQueryRun +from langchain.tools.render import format_tool_to_openai_function +from langchain.utilities import WikipediaAPIWrapper + +# Installing necessary package for Wikipedia queries +# !pip install langchain wikipedia + +# Initializing Wikipedia query tool with content character limit +wiki = WikipediaQueryRun( + api_wrapper=WikipediaAPIWrapper(top_k_results=5, doc_content_chars_max=10_000) +) +tools = [wiki] + +# Creating a prompt template with placeholders for user input and agent scratchpad +prompt = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant"), + ("user", "{input}"), + MessagesPlaceholder(variable_name="agent_scratchpad"), +]) +llm = ChatOpenAI(model="gpt-3.5-turbo") + +# Building an agent with a focus on managing prompt size +agent = ( + { + "input": lambda x: x["input"], + "agent_scratchpad": lambda x: format_to_openai_function_messages( + x["intermediate_steps"] + ), + } + | prompt + | llm.bind(functions=[format_tool_to_openai_function(t) for t in tools]) + | OpenAIFunctionsAgentOutputParser() +) + +# Executing the agent +agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) +response = agent_executor.invoke({ + "input": "What is the tallest mountain?" +}) +print(response) +``` +- **Explanation**: This code showcases an agent setup that includes a Wikipedia query tool and a prompt template. The agent's construction focuses on managing the prompt size by limiting the content from intermediate steps. The response to a query is generated with consideration to the prompt's overall size, ensuring efficiency. + +### Section: Agent Construction and Management +- **Objective**: To demonstrate the process of constructing and managing agents in LangChain. This includes creating agents from runnables and understanding the key components and logic involved in agent operation. 
+- **Example Code**:
+```python
+from langchain.agents import AgentExecutor, XMLAgent, tool
+from langchain.chat_models import ChatAnthropic
+
+# Initializing the chat model with a specific model version
+model = ChatAnthropic(model="claude-2")
+
+# Defining a custom tool for the agent
+@tool
+def weather_search(query: str) -> str:
+    """Tool to search for weather information."""
+    # This is a placeholder for actual weather search logic
+    return "Sunny with a high of 75 degrees"
+
+tool_list = [weather_search]
+
+# Retrieving the default prompt for the XMLAgent
+prompt = XMLAgent.get_default_prompt()
+
+# Rendering intermediate steps into the XML format the agent expects
+def convert_intermediate_steps(intermediate_steps):
+    log = ""
+    for action, observation in intermediate_steps:
+        log += (
+            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
+            f"</tool_input><observation>{observation}</observation>"
+        )
+    return log
+
+# Rendering the tool list into the prompt's {tools} variable
+def convert_tools(tools):
+    return "\n".join(f"{t.name}: {t.description}" for t in tools)
+
+# Building an agent from a runnable
+agent = (
+    {
+        "question": lambda x: x["question"],
+        "intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"]),
+    }
+    | prompt.partial(tools=convert_tools(tool_list))
+    | model.bind(stop=["</tool_input>", "</final_answer>"])
+    | XMLAgent.get_default_output_parser()
+)
+
+# Executing the agent with a specific query
+agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)
+response = agent_executor.invoke({"question": "What's the weather in New York today?"})
+print(response)
+```
+- **Explanation**: This code block illustrates how to build an agent using LangChain's `XMLAgent`. The agent includes a custom weather tool, renders its intermediate steps as XML tags (`<tool>`, `<tool_input>`, `<observation>`), and stops generation at the closing tags so the output parser can hand control back to the executor.
+
+---
+
+### Section: Code Writing with LangChain
+- **Objective**: To showcase how LangChain can be utilized for writing and executing Python code. This feature enhances the AI's ability to assist in programming tasks, making it a valuable tool for developers.
+- **Example Code**:
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts import ChatPromptTemplate
+from langchain.schema.output_parser import StrOutputParser
+from langchain_experimental.utilities import PythonREPL
+
+# Instructing the model to return only Python code in a fenced ```python block,
+# so that sanitize_output below can extract it reliably
+template = (
+    "Write some Python code to solve the user's problem. "
+    "Return only the code, wrapped in a ```python Markdown code block."
+)
+prompt = ChatPromptTemplate.from_messages([("system", template), ("human", "{problem}")])
+
+# Initializing the chat model
+model = ChatOpenAI()
+
+# Extracting the Python code from the model's fenced Markdown output
+def sanitize_output(text):
+    _, after = text.split("```python")
+    return after.split("```")[0]
+
+# Building the chain for code writing and execution
+chain = prompt | model | StrOutputParser() | sanitize_output | PythonREPL().run
+
+# Invoking the chain with a programming problem
+problem = "calculate the factorial of a number"
+code_result = chain.invoke({"problem": problem})
+print(code_result)
+```
+- **Explanation**: This code block demonstrates how LangChain can be used to generate and execute Python code in response to a given problem statement. The system prompt constrains the model to emit a fenced code block, `sanitize_output` strips the fencing, and `PythonREPL` executes the result, illustrating LangChain's capability to automate and assist with coding tasks.
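+
+A note on the final step above: `PythonREPL().run` executes the sanitized code string in-process and returns whatever it printed, which is why the chain can end with it. A minimal standalone sketch follows (with the caution that executing model-generated code is inherently risky and should be sandboxed in real deployments):
+```python
+from langchain_experimental.utilities import PythonREPL
+
+repl = PythonREPL()
+# run() executes the code string and returns the captured stdout
+output = repl.run("print(sum(range(5)))")
+print(output)  # -> 10
+```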
+
+---
+
+### Section: LangServe
+
+- **Objective**: To illustrate the use of LangServe within the LangChain ecosystem. LangServe provides server-side functionality for deploying and managing language models, making it an essential tool for scalable and efficient AI applications.
+
+> Note: the `LangServeClient` used in the examples below is an illustrative, hypothetical management API rather than part of the published `langserve` package (which exposes `add_routes` on the server and `RemoteRunnable` on the client; see the sketch at the end of this section). Treat these snippets as pseudocode for the deployment workflow.
+
+#### Basic Deployment and Querying with GPT-3.5-Turbo
+- **Example**: Deploying and querying the GPT-3.5-Turbo model using LangServe.
+```python
+from langserve import LangServeClient  # hypothetical client API, for illustration
+
+# Initialize the LangServe client
+langserve_client = LangServeClient(api_url="https://api.langserve.com")
+
+# Deploying the GPT-3.5-Turbo model
+model_config = {
+    "model_name": "gpt-3.5-turbo",
+    "description": "GPT-3.5 Turbo model for general-purpose use"
+}
+deployment_response = langserve_client.deploy_model(model_config)
+print("Deployment Status:", deployment_response.status)
+
+# Sending a query to the deployed model
+query = "Explain the concept of machine learning in simple terms."
+response = langserve_client.query_model(model_name="gpt-3.5-turbo", query=query)
+print("Model Response:", response.content)
+```
+
+#### Advanced Deployment and Custom Configuration
+- **Example**: Utilizing LangServe for deploying custom-configured models for specialized tasks.
+```python
+# Custom deployment with specific parameters
+advanced_model_config = {
+    "model_name": "custom-gpt-model",
+    "description": "A custom-configured GPT model for specialized tasks",
+    "parameters": {
+        "temperature": 0.7,
+        "max_tokens": 150
+    }
+}
+langserve_client.deploy_model(advanced_model_config)
+
+# Querying the custom model
+custom_query = "Generate a technical summary of quantum computing."
+custom_response = langserve_client.query_model(model_name="custom-gpt-model", query=custom_query)
+print("Custom Model Response:", custom_response.content)
+```
+
+#### Model Management and Analytics
+- **Example**: Managing deployed models and accessing detailed analytics.
+```python
+# Fetching model analytics
+model_analytics = langserve_client.get_model_analytics(model_name="gpt-3.5-turbo")
+print("Model Usage Analytics:", model_analytics)
+
+# Updating a deployed model's configuration
+update_config = {
+    "temperature": 0.5,
+    "max_tokens": 200
+}
+langserve_client.update_model_config(model_name="gpt-3.5-turbo", new_config=update_config)
+
+# Retrieving updated model details
+updated_model_details = langserve_client.get_model_details(model_name="gpt-3.5-turbo")
+print("Updated Model Details:", updated_model_details)
+```
+
+#### Integration with LangChain Applications
+- **Example**: Demonstrating integration of LangServe with LangChain.
+```python
+from langchain.chains import SimpleChain  # illustrative; not an actual langchain class
+
+# Building a chain backed by a LangServe-deployed model
+chain = SimpleChain(model_name="gpt-3.5-turbo", langserve_client=langserve_client)
+
+# Executing the chain with a user query
+chain_response = chain.execute("What are the latest trends in AI?")
+print("Chain Response using LangServe Model:", chain_response)
+```
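+
+#### Deployment with the Published langserve API
+For reference, here is a minimal sketch using the actual published `langserve` package, which wraps any LangChain runnable in a FastAPI app via `add_routes` and queries it with `RemoteRunnable`. The path and port below are placeholder assumptions.
+```python
+# server.py — run with: uvicorn server:app --port 8000
+from fastapi import FastAPI
+from langchain.chat_models import ChatOpenAI
+from langserve import add_routes
+
+app = FastAPI(title="LangServe demo")
+# Expose a runnable (here, a chat model) under the /chat path
+add_routes(app, ChatOpenAI(model="gpt-3.5-turbo"), path="/chat")
+```
+```python
+# client.py — query the deployed runnable over HTTP
+from langserve import RemoteRunnable
+
+chat = RemoteRunnable("http://localhost:8000/chat/")
+print(chat.invoke("Explain the concept of machine learning in simple terms."))
+```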
+
+#### LangSmith Tracing for Enhanced Monitoring
+- **Objective**: Showcasing the use of LangSmith tracing alongside LangServe for detailed monitoring and analysis.
+- **Example Code**:
+```python
+from langserve import LangServeClient  # hypothetical client API, as above
+from langsmith import Tracing  # illustrative stand-in; see the note below on real tracing setup
+
+# Initialize LangServe client and enable LangSmith tracing
+langserve_client = LangServeClient(api_url="https://api.langserve.com")
+Tracing.enable()
+
+# Deploying a model with tracing enabled
+model_config = {
+    "model_name": "gpt-3.5-turbo",
+    "description": "GPT-3.5 Turbo model with LangSmith tracing"
+}
+langserve_client.deploy_model(model_config)
+
+# Query with tracing for detailed interaction logs
+query = "Explain the impact of AI on environmental sustainability."
+response = langserve_client.query_model(model_name="gpt-3.5-turbo", query=query)
+print("Traced Model Response:", response.content)
+
+# Retrieve and analyze trace logs
+trace_logs = Tracing.get_logs()
+print("Trace Logs:", trace_logs)
+```
+- **Explanation**: This section highlights pairing LangSmith tracing with LangServe to monitor and analyze model interactions. It is particularly valuable for understanding model behavior, optimizing performance, and debugging complex scenarios. Note that the `Tracing` class above is illustrative: in practice, LangSmith tracing is enabled by setting the `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` environment variables, and traces are inspected via the `langsmith.Client`.
+
+### LangSmith Enhanced Capabilities: Integrating Lilac, Prompt Versioning, and More
+
+#### Introduction
+LangSmith, complemented by tools like Lilac, offers advanced capabilities for data analysis and prompt management. This section explores how to leverage these tools within LangSmith, covering prompt versioning, retrieval QA chains, and editable prompt templates.
+
+#### Integrating Lilac for Enhanced Data Analysis
+- **Functionality**: Utilize Lilac to import, enrich, and analyze datasets from LangSmith.
+- **Workflow**:
+  1. Query datasets from LangSmith.
+  2. Import and enrich the datasets using Lilac's analysis tools.
+  3. Export the processed data for further use within LangSmith.
+
+#### Advanced Prompt Management with Versioning
+- **Functionality**: Manage different versions of prompts in LangSmith to ensure consistency and accuracy (a minimal Hub-based sketch follows the subsections below).
+- **Application**:
+  1. Track and manage versions of prompts.
+  2. Apply specific prompt versions in complex deployments like retrieval QA chains.
+
+#### Retrieval QA Chains
+- **Functionality**: Configure retrieval QA chains in LangSmith, leveraging specific prompt versions for precise information retrieval.
+- **Implementation**:
+  1. Define the prompt and its version for the QA chain.
+  2. Execute queries through the retrieval QA chain to obtain accurate results.
+
+#### Editable Prompt Templates
+- **Functionality**: Use editable prompt templates to customize and experiment with different prompt structures in LangSmith.
+- **Usage**:
+  1. Create and edit prompt templates dynamically.
+  2. Apply the edited templates in LangSmith workflows for varied applications.
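+
+In practice, prompt versioning is exposed through the LangChain Hub: a prompt published there can be pulled at its latest version or pinned to a specific commit. A minimal sketch, assuming a hypothetical published prompt (the handle and commit hash are placeholders; requires `pip install langchainhub`):
+```python
+from langchain import hub
+
+# Pull the latest version of a published prompt (placeholder handle)
+prompt = hub.pull("your-org/your-prompt")
+
+# Pin a specific version by appending its commit hash (placeholder hash)
+pinned_prompt = hub.pull("your-org/your-prompt:0a1b2c3d")
+
+# Format the pinned prompt, assuming it declares a {question} variable
+print(pinned_prompt.format(question="What does LangSmith trace?"))
+```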
+
+#### Comprehensive Code Example
+The following end-to-end example is illustrative pseudocode for the workflow described above: `langsmith.initialize`, `EditablePromptTemplate`, and the Lilac calls shown here stand in for the intended operations rather than exact published APIs.
+```python
+# Import necessary libraries (lilac and langsmith are assumed to be installed and configured)
+import langchain
+import langsmith
+import lilac
+from langchain.prompt_templates import EditablePromptTemplate  # illustrative import
+
+# LangSmith setup (assuming required configurations and authentications are done)
+langsmith.initialize(api_key="YOUR_LANGSMITH_API_KEY", endpoint="https://api.langsmith.com")
+
+# Query and fetch datasets from LangSmith using the list_runs method
+project_runs = langsmith.client.list_runs(project_name="your_project_name")
+
+# Import the dataset into Lilac and enrich it
+lilac_dataset = lilac.import_dataset(project_runs)
+lilac_dataset.compute_signal(lilac.PIISignal(), 'question')  # Example signal
+lilac_dataset.compute_signal(lilac.NearDuplicateSignal(), 'output')  # Another example signal
+
+# Export the enriched dataset for integration with LangSmith
+exported_dataset = lilac.export_dataset(lilac_dataset)
+
+# Implementing prompt versioning (assuming such functionality in LangSmith)
+prompt_version = 'specific_version_hash'
+prompt_name = 'your_prompt_name'
+prompt = langsmith.load_prompt(prompt_name, version=prompt_version)
+
+# Configuring a retrieval QA chain with the versioned prompt
+qa_chain = langchain.RetrievalQAChain(prompt=prompt)
+
+# Execute a query using the QA chain
+query_result = qa_chain.query("What is LangSmith's functionality?")
+print(f"QA Chain Query Result: {query_result}")
+
+# Editable prompt templates for dynamic prompt editing
+editable_prompt = EditablePromptTemplate(prompt_name)
+editable_prompt.edit(new_template="New template content for LangSmith")
+edited_prompt = editable_prompt.apply()
+
+# Example usage of the edited prompt in a LangSmith application
+edited_prompt_result = langsmith.run_prompt(edited_prompt, input_data="Sample input for edited prompt")
+print(f"Edited Prompt Result: {edited_prompt_result}")
+
+# Final step: integrate the exported dataset back into LangSmith for further use
+integration_status = langsmith.integrate_dataset(exported_dataset)
+if integration_status.success:
+    print("Dataset successfully integrated back into LangSmith.")
+else:
+    print(f"Integration failed with error: {integration_status.error}")
+```
+
+#### Conclusion
+By integrating these functionalities, LangSmith users can significantly enhance their language model applications. The synergy between LangSmith, tools like Lilac, and advanced prompt management paves the way for more sophisticated and effective AI solutions.
+
+---
+
+## Conclusion
+
+In this guide, we explored the functionalities and applications of LangChain, LangServe, and LangSmith: building AI applications with LangChain, deploying and managing them efficiently with LangServe, and ensuring their performance through LangSmith's tracing and debugging. Together, these tools form a comprehensive ecosystem for advanced AI development.
+
+As the field of AI evolves, so will the capabilities and applications of these tools. Keep exploring new features, updates, and best practices to stay current; no document is timeless in its teachings, for subsequent wisdom builds upon it.
+
+For further learning and support, explore the following resources:
+
+- [LangChain Interface](https://python.langchain.com/docs/expression_language/interface)
+- [LangChain Cookbook - Prompt + LLM](https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser)
+- [LangChain Cookbook - Embedding Router](https://python.langchain.com/docs/expression_language/cookbook/embedding_router)
+- [LangChain Cookbook - Agent](https://python.langchain.com/docs/expression_language/cookbook/agent)
+- [LangChain Cookbook - Code Writing](https://python.langchain.com/docs/expression_language/cookbook/code_writing)
+- [LangChain Cookbook - Memory](https://python.langchain.com/docs/expression_language/cookbook/memory)
+- [LangChain Cookbook - Managing Prompt Size](https://python.langchain.com/docs/expression_language/cookbook/prompt_size)
+- [LangChain Cookbook - Tools](https://python.langchain.com/docs/expression_language/cookbook/tools)
+
+Thank you for engaging with this documentation. May it serve as a valuable resource on your journey to mastering LangChain, LangServe, and LangSmith.
+
+---