Merge pull request #803 from Undertone0809/v1.17.0/aichat-add-memo
feat: AIChat add memory
Undertone0809 authored Jul 20, 2024
2 parents 2da8b05 + 7657a22 commit f7eca73
Showing 11 changed files with 142 additions and 69 deletions.
9 changes: 5 additions & 4 deletions docs/_sidebar.md
@@ -1,6 +1,7 @@
- Get started
- [:bookmark_tabs: Introduction](README.md)
- [:bookmark: Quick Start](get_started/quick_start.md#quick-start)
- [Introduction](README.md)
- [Quick Start](get_started/quick_start.md#quick-start)
- [How-to Guide](get_started/how-to-guide.md#how-to-guides)

- Use Cases
- [Best practices](use_cases/intro.md#use-cases)
@@ -11,9 +12,9 @@

- Modules
- [:robot: Agent](modules/agent.md#agent)
- [:alien: Assistant Agent](modules/agents/assistant_agent_usage.md#assistant-agent)
- [Assistant Agent](modules/agents/assistant_agent_usage.md#assistant-agent)
- [:notebook_with_decorative_cover: LLMs](modules/llm/llm.md#llm)
- [ LLM Factory](modules/llm/llm-factory-usage.md#LLMFactory)
- [LLM Factory](modules/llm/llm-factory-usage.md#LLMFactory)
- [Custom LLM](modules/llm/custom_llm.md#custom-llm)
- [OpenAI](modules/llm/openai.md#openai)
- [Erniebot 百度文心](modules/llm/erniebot.md#百度文心erniebot)
18 changes: 18 additions & 0 deletions docs/get_started/how-to-guide.md
@@ -0,0 +1,18 @@
# How-to guides

Here you’ll find answers to “How do I…?” questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.

## Key features

This highlights functionality that is core to using Promptulate.

- [How to: return structured data from a model](use_cases/chat_usage.md#structured-output)


- [How to write model name in pne](other/how_to_write_model_name.md)


- [How to use pne.chat() and AIChat()](use_cases/chat_usage.md#chat)


- [How to build a streamlit app by pne](use_cases/streamlit+pne.chat().md#build-a-simple-chatbot-using-streamlit-and-pne)
17 changes: 8 additions & 9 deletions docs/get_started/quick_start.md
@@ -29,7 +29,7 @@ The following diagram shows the core architecture of `promptulate`:
Now let's see how to use `pne.chat()` to chat with the model. The following example uses `gpt-4-turbo`.

```python
import promptulate as pne
import pne

response: str = pne.chat(messages="What is the capital of China?", model="gpt-4-turbo")
```
@@ -85,7 +85,7 @@ The powerful model support of pne allows you to easily build any third-party mod
Now let's see how to run local llama models from ollama with pne.

```python
import promptulate as pne
import pne

resp: str = pne.chat(model="ollama/llama2", messages=[{"content": "Hello, how are you?", "role": "user"}])
```
@@ -95,7 +95,7 @@ resp: str = pne.chat(model="ollama/llama2", messages=[{"content": "Hello, how ar
You can use its multimodal capabilities in any of your promptulate applications!

```python
import promptulate as pne
import pne

messages=[
{
@@ -163,7 +163,7 @@ pne.chat() is the most powerful function in pne; in real-world LLM Agent application development

```python
from typing import List
import promptulate as pne
import pne
from pydantic import BaseModel, Field

class LLMResponse(BaseModel):
@@ -183,7 +183,7 @@ provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei'

```python
import os
import promptulate as pne
import pne
from langchain.agents import load_tools

os.environ["OPENAI_API_KEY"] = "your-key"
@@ -292,7 +292,7 @@ Agent is one of the core components of `promptulate`; its core idea is to use an LLM, Tool
The following example shows how to use `ToolAgent` together with Tools.

```python
import promptulate as pne
import pne
from promptulate.tools import (
DuckDuckGoTool,
Calculator,
@@ -338,7 +338,7 @@ Below is an example of how to use the promptulate and langchain libraries to cre
> You need to set the `OPENAI_API_KEY` environment variable to your OpenAI API key. Click [here](https://undertone0809.github.io/promptulate/#/modules/tools/langchain_tool_usage?id=langchain-tool-usage) to see the detail.
```python
import promptulate as pne
import pne
from langchain.agents import load_tools

tools: list = load_tools(["dalle-image-generator"])
@@ -363,10 +363,9 @@ Here is the generated image: [![Halloween Night at a Haunted Museum](https://oai
The following example shows a best practice for using formatted output in WebAgent:

```python
import pne
from pydantic import BaseModel, Field

import promptulate as pne


class Response(BaseModel):
city: str = Field(description="City name")
17 changes: 15 additions & 2 deletions docs/index.html
@@ -101,8 +101,21 @@
<script src="//cdn.jsdelivr.net/npm/prismjs@1/components/prism-bash.min.js"></script>
<script src="//cdn.jsdelivr.net/npm/prismjs@1/components/prism-python.min.js"></script>
<script src="//cdn.jsdelivr.net/npm/prismjs@1/components/prism-json.min.js"></script>

<script id="embedai" src="https://embedai.thesamur.ai/embedai.js" data-id="pne-docs"></script>
<script>
window.difyChatbotConfig = {
token: 'a8m07lCHAAl3Nh9K'
}
</script>
<script
src="https://udify.app/embed.min.js"
id="a8m07lCHAAl3Nh9K"
defer>
</script>
<style>
#dify-chatbot-bubble-button {
background-color: #f67559 !important;
}
</style>

<!-- sidebar plugin -->
<!--<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify-sidebar-collapse/dist/sidebar.min.css" />-->
4 changes: 0 additions & 4 deletions docs/use_cases/streamlit+pne.chat().md
@@ -135,7 +135,3 @@ pip install -r requirements.txt
```shell
streamlit run app.py
```
The running result is as follows:
![streamlit+pne](./img/streamlit+pne.png)
@@ -37,7 +37,6 @@
{
"cell_type": "code",
"execution_count": 1,
"outputs": [],
"source": [
"OPENAI_API_KEY=\"your_openai_api_key\""
],
@@ -48,7 +47,8 @@
"start_time": "2024-05-16T12:59:12.434018100Z"
}
},
"id": "bdba0ee6cdddfda1"
"id": "bdba0ee6cdddfda1",
"outputs": []
},
{
"cell_type": "markdown",
@@ -115,7 +115,6 @@
{
"cell_type": "code",
"execution_count": 2,
"outputs": [],
"source": [
"from promptulate.tools.wikipedia.tools import wikipedia_search\n",
"from promptulate.tools.math.tools import calculator\n",
@@ -128,7 +127,8 @@
"start_time": "2024-05-16T12:59:12.454715600Z"
}
},
"id": "bf24915502504abb"
"id": "bf24915502504abb",
"outputs": []
},
{
"cell_type": "markdown",
@@ -265,7 +265,6 @@
{
"cell_type": "code",
"execution_count": 6,
"outputs": [],
"source": [
"# reasoning based tool\n",
"def word_problem_tool(question: str) -> str:\n",
@@ -293,7 +292,8 @@
"start_time": "2024-05-16T12:59:15.261878200Z"
}
},
"id": "9e135028cb2f3b9"
"id": "9e135028cb2f3b9",
"outputs": []
},
{
"cell_type": "markdown",
@@ -309,34 +309,6 @@
{
"cell_type": "code",
"execution_count": 7,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001B[31;1m\u001B[1;3m[Agent] Tool Agent start...\u001B[0m\n",
"\u001B[36;1m\u001B[1;3m[User instruction] I have 3 apples and 4 oranges.I give half of my oranges away and buy two dozen new ones,along with three packs of strawberries.Each pack of strawberry has 30 strawberries.How many total pieces of fruit do I have at the end?\u001B[0m\n",
"\u001B[33;1m\u001B[1;3m[Thought] To determine the total number of pieces of fruit, we calculate the number of remaining oranges after giving half away, then add the new oranges purchased and the total strawberries from the three packs. Apples remain constant at 3.\u001B[0m\n",
"\u001B[33;1m\u001B[1;3m[Action] word_problem_tool args: {'question': 'If a person has 4 oranges and gives half away, how many are left? They then buy 24 more oranges and acquire 3 packs of strawberries with each pack containing 30 strawberries. How many pieces of fruit do they have in total if they originally had 3 apples?'}\u001B[0m\n",
"\u001B[33;1m\u001B[1;3m[Observation] - Start with the number of oranges the person initially has: 4 oranges.\n",
"- They give away half of these oranges, so let's calculate half of 4:\n",
" - 4 / 2 = 2 oranges.\n",
"- The number of oranges the person has left after giving half away is 2.\n",
"- The person then buys 24 more oranges, so we add these to the remaining oranges:\n",
" - 2 oranges (remaining after giving away half) + 24 oranges (bought) = 26 oranges.\n",
"- Next, we account for the acquired strawberry packs. There are 3 packs, each with 30 strawberries.\n",
" - 3 packs * 30 strawberries per pack = 90 strawberries.\n",
"- The person originally had 3 apples, which we'll add to the total count of fruit:\n",
" - 3 apples (originally had).\n",
"- To calculate the total pieces of fruit, add the number of oranges, strawberries, and apples together:\n",
" - 26 oranges (after transactions) + 90 strawberries (from the packs) + 3 apples (originally had) = 119 pieces of fruit.\n",
"- Final answer: The person has 119 pieces of fruit in total.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m[Agent Result] 119\u001B[0m\n",
"\u001B[38;5;200m\u001B[1;3m[Agent] Agent End.\u001B[0m\n",
"119\n"
]
}
],
"source": [
"# agent\n",
"agent = pne.ToolAgent(tools=[wikipedia_tool, math_tool, word_problem_tool],\n",
@@ -352,7 +324,8 @@
"start_time": "2024-05-16T12:59:15.277354200Z"
}
},
"id": "5b03977f863172b"
"id": "5b03977f863172b",
"outputs": []
},
{
"cell_type": "markdown",
2 changes: 1 addition & 1 deletion promptulate/agents/base.py
@@ -48,7 +48,7 @@ def run(
prompt = (
f"{formatter.get_formatted_instructions()}\n##User input:\n{result}"
)
json_response = self.get_llm()(prompt)
json_response: str = self.get_llm()(prompt)
return formatter.formatting_result(json_response)

Hook.call_hook(
46 changes: 34 additions & 12 deletions promptulate/chat.py
@@ -1,6 +1,7 @@
from typing import Dict, List, Optional, TypeVar, Union

from promptulate.agents.base import BaseAgent
from promptulate.agents.tool_agent.agent import ToolAgent
from promptulate.beta.agents.assistant_agent import AssistantAgent
from promptulate.llms import BaseLLM
from promptulate.llms.factory import LLMFactory
@@ -11,6 +12,7 @@
BaseMessage,
MessageSet,
StreamIterator,
SystemMessage,
)
from promptulate.tools.base import BaseTool, ToolTypes
from promptulate.utils.logger import logger
@@ -30,7 +32,6 @@ def _convert_message(messages: Union[List, MessageSet, str]) -> MessageSet:
"""
if isinstance(messages, str):
messages: List[Dict] = [
{"content": "You are a helpful assistant", "role": "system"},
{"content": messages, "role": "user"},
]
if isinstance(messages, list):
@@ -79,6 +80,7 @@
tools: Optional[List[ToolTypes]] = None,
custom_llm: Optional[BaseLLM] = None,
enable_plan: bool = False,
enable_memory: bool = False,
):
"""Initialize the AIChat.
@@ -89,18 +91,20 @@
will use Agent to run.
custom_llm(Optional[BaseLLM]): custom LLM instance.
enable_plan(bool): use Agent with plan ability if True.
enable_memory(bool): enable memory if True.
"""
self.llm: BaseLLM = _get_llm(model, model_config, custom_llm)
self.tools: Optional[List[ToolTypes]] = tools
self.agent: Optional[BaseAgent] = None

self.enable_memory: bool = enable_memory
self.memory: MessageSet = MessageSet(messages=[])

if tools:
if enable_plan:
self.agent = AssistantAgent(tools=self.tools, llm=self.llm)
logger.info("[pne chat] invoke AssistantAgent with plan ability.")
else:
from promptulate.agents.tool_agent.agent import ToolAgent

self.agent = ToolAgent(tools=self.tools, llm=self.llm)
logger.info("[pne chat] invoke ToolAgent.")

@@ -113,7 +117,7 @@ def run(
stream: bool = False,
**kwargs,
) -> Union[str, BaseMessage, T, List[BaseMessage], StreamIterator]:
"""Run the AIChat.
"""Run the AIChat. AIChat uses self.memory to store chat messages.
Args:
messages(Union[List, MessageSet, str]): chat messages. It can be str or
@@ -139,33 +143,51 @@
"stream, tools and output_schema can't be True at the same time, "
"because stream is used to return Iterator[BaseMessage]."
)

if not self.enable_memory:
self.memory: MessageSet = MessageSet(messages=[])

_: MessageSet = _convert_message(messages)
self.memory.add_from_message_set(_)

# initialize memory with system message if it is empty
if len(self.memory.messages) == 1:
self.memory.messages = [
SystemMessage(content="You are a helpful assistant"),
*self.memory.messages,
]

if self.agent:
return self.agent.run(messages, output_schema=output_schema)
response: Union[str, BaseModel] = self.agent.run(
self.memory.string_messages, output_schema=output_schema
)
self.memory.add_ai_message(response)

messages: MessageSet = _convert_message(messages)
return response

# add output format into the last prompt if provide
if output_schema:
instruction: str = get_formatted_instructions(
json_schema=output_schema, examples=examples
)
messages.messages[-1].content += f"\n{instruction}"
self.memory.messages[-1].content += f"\n{instruction}"
logger.info(f"[pne chat] messages: {messages}")

response: Union[AssistantMessage, StreamIterator] = self.llm.predict(
messages, stream=stream, **kwargs
self.memory, stream=stream, **kwargs
)

# TODO: add stream memory support
if stream:
return response

if isinstance(response, AssistantMessage):
# Access additional_kwargs only if response is AssistantMessage
logger.info(f"[pne chat] response: {response.additional_kwargs}")
logger.info(
f"[pne chat] response: {response.additional_kwargs or response.content}"
)
self.memory.add_ai_message(response.content)

# return output format if provide
if output_schema:
logger.info("[pne chat] return formatted response.")
return formatting_result(
pydantic_obj=output_schema, llm_output=response.content
)
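
The memory bookkeeping this commit adds to `AIChat` — accumulate turns in `self.memory`, inject the system prompt once when memory holds only the first user message, append each assistant reply — can be sketched standalone. This is a minimal illustration of the mechanism, not the pne API: the `Memory` and `Message` classes and their method names below are hypothetical stand-ins for `MessageSet`, `SystemMessage`, and friends.

```python
# Standalone sketch of the memory logic in the chat.py diff above.
# Memory/Message and the method names are illustrative, not the pne API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Message:
    role: str
    content: str


@dataclass
class Memory:
    messages: List[Message] = field(default_factory=list)

    def add_user_message(self, content: str) -> None:
        self.messages.append(Message("user", content))
        # Mirror the diff: initialize memory with a system message only
        # when it contains exactly one message (the first user turn).
        if len(self.messages) == 1:
            self.messages.insert(
                0, Message("system", "You are a helpful assistant")
            )

    def add_ai_message(self, content: str) -> None:
        self.messages.append(Message("assistant", content))


memory = Memory()
memory.add_user_message("What is the capital of China?")
memory.add_ai_message("Beijing.")
memory.add_user_message("And how large is it?")  # follow-up keeps prior turns
print([m.role for m in memory.messages])
# → ['system', 'user', 'assistant', 'user']
```

With `enable_memory=False`, `run()` resets `self.memory` to an empty `MessageSet` on every call, which reduces to the old stateless behavior; with it enabled, the accumulated history is what gets passed to `self.llm.predict`.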