From 2854d9a98348cfea0a030f27e8ea41adde48214a Mon Sep 17 00:00:00 2001 From: "panxuchen.pxc" Date: Fri, 19 Jan 2024 17:43:28 +0800 Subject: [PATCH] fix sphinx link --- .../source/tutorial/101-installation.md | 8 ++-- .../source/tutorial/102-concepts.md | 7 ++- .../sphinx_doc/source/tutorial/103-example.md | 19 ++++---- .../sphinx_doc/source/tutorial/104-usecase.md | 43 +++++++++++-------- .../sphinx_doc/source/tutorial/105-logging.md | 12 ++++-- docs/sphinx_doc/source/tutorial/201-agent.md | 25 ++++++----- .../source/tutorial/202-pipeline.md | 2 + docs/sphinx_doc/source/tutorial/203-model.md | 20 +++++---- .../sphinx_doc/source/tutorial/204-service.md | 13 +++--- docs/sphinx_doc/source/tutorial/205-memory.md | 14 +++--- docs/sphinx_doc/source/tutorial/206-prompt.md | 8 ++-- .../sphinx_doc/source/tutorial/207-monitor.md | 6 +-- .../source/tutorial/208-distribute.md | 4 +- .../source/tutorial/301-community.md | 6 +-- .../source/tutorial/302-contribute.md | 7 +-- docs/sphinx_doc/source/tutorial/advance.rst | 2 +- docs/sphinx_doc/source/tutorial/main.md | 30 ++++++------- 17 files changed, 120 insertions(+), 106 deletions(-) diff --git a/docs/sphinx_doc/source/tutorial/101-installation.md b/docs/sphinx_doc/source/tutorial/101-installation.md index 6b05879a0..fdde47b4f 100644 --- a/docs/sphinx_doc/source/tutorial/101-installation.md +++ b/docs/sphinx_doc/source/tutorial/101-installation.md @@ -1,3 +1,5 @@ +(101-installation)= + # Installation To install AgentScope, you need to have Python 3.9 or higher installed. We recommend setting up a new virtual environment specifically for AgentScope: @@ -44,7 +46,6 @@ pip install agentscope pip install agentscope[distribute] # On Mac use `pip install agentscope\[distribute\]` ``` - #### Install from Source For users who prefer to install AgentScope directly from the source code, follow these steps to clone the repository and install the platform in editable mode: @@ -62,9 +63,6 @@ pip install -e . pip install -e .[distribute] # On Mac use `pip install -e .\[distribute\]` ``` - **Note**: The `[distribute]` option installs additional dependencies required for distributed applications. Remember to activate your virtual environment before running these commands. - - -[[Return to the top]](#installation) \ No newline at end of file +[[Return to the top]](#installation) diff --git a/docs/sphinx_doc/source/tutorial/102-concepts.md b/docs/sphinx_doc/source/tutorial/102-concepts.md index 8b98182cc..a83da431e 100644 --- a/docs/sphinx_doc/source/tutorial/102-concepts.md +++ b/docs/sphinx_doc/source/tutorial/102-concepts.md @@ -1,3 +1,5 @@ +(102-concepts)= + # Fundamental Concepts In this tutorial, you'll have an initial understanding of the **fundamental concepts** of AgentScope. We will focus on how a multi-agent application runs based on our platform and familiarize you with the essential terms. Let's get started! @@ -12,7 +14,6 @@ In this tutorial, you'll have an initial understanding of the **fundamental conc * **Service** is a collection of functionality tools (e.g., web search, code interpreter, file processing) that provide specific capabilities or processes that are independent of an agent's memory state. Services can be invoked by agents or other components and designed to be reusable across different scenarios. * **Pipeline** refers to the interaction order or pattern of agents in a task. AgentScope provides built-in `pipelines` to streamline the process of collaboration across multiple agents, such as `SequentialPipeline` and `ForLoopPipeline`. 
When a `Pipeline` is executed, the *message* passes from predecessors to successors with intermediate results for the task. - ## Code Structure ```bash @@ -42,6 +43,4 @@ AgentScope └── ... .. ``` - - -[[Return to the top]](#fundamental-concepts) \ No newline at end of file +[[Return to the top]](#fundamental-concepts) diff --git a/docs/sphinx_doc/source/tutorial/103-example.md b/docs/sphinx_doc/source/tutorial/103-example.md index 02995edc2..e81fa29c2 100644 --- a/docs/sphinx_doc/source/tutorial/103-example.md +++ b/docs/sphinx_doc/source/tutorial/103-example.md @@ -1,3 +1,5 @@ +(103-example)= + # Getting Started with a Simple Example AgentScope is a versatile platform for building and running multi-agent applications. We provide various pre-built examples that will help you quickly understand how to create and use multi-agent for various applications. In this tutorial, you will learn how to set up a **simple agent-based interaction**. @@ -13,10 +15,9 @@ Agent is the basic composition and communication unit in AgentScope. To initiali | Embedding | `openai_embedding` | API for text embeddings | | General usages in POST | `post_api` | *Huggingface* and *ModelScope* Inference API, and other customized post API | - Each API has its specific configuration requirements. For example, to configure an OpenAI API, you would need to fill out the following fields in the model config in a dict, a yaml file or a json file: -``` +```python model_config = { "type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding" "name": "{your_config_name}", # A unique identifier for your config @@ -26,9 +27,10 @@ model_config = { } ``` -For open-source models, we support integration with various model interfaces such as HuggingFace, ModelScope, FastChat, and vllm. You can find scripts on deploying these services in the `scripts` directory, and we defer the detailed instructions to [[Using Different Model Sources with Model API]](https://alibaba.github.io/AgentScope/tutorial/203-model.html). +For open-source models, we support integration with various model interfaces such as HuggingFace, ModelScope, FastChat, and vllm. You can find scripts on deploying these services in the `scripts` directory, and we defer the detailed instructions to [[Using Different Model Sources with Model API]](203-model). You can register your configuration by calling AgentScope's initilization method as follow. Besides, you can also load more than one config by calling init mutliple times. + ```python import agentscope @@ -38,7 +40,6 @@ modelscope_cfg_dict = {...dict_filling...} agentscope.init(model_configs=[openai_cfg_dict, modelscope_cfg_dict]) ``` - ## Step2: Create Agents Creating agents is straightforward in AgentScope. After initializing AgentScope with your model configurations (Step 1 above), you can then define each agent with its corresponding role and specific model. @@ -55,7 +56,7 @@ dialogAgent = DialogAgent(name="assistant", model="gpt-4") userAgent = UserAgent() ``` -**NOTE**: Please refer to [[Customizing Your Custom Agent with Agent Pool]](https://alibaba.github.io/AgentScope/tutorial/201-agent.html) for all available agents. +**NOTE**: Please refer to [[Customizing Your Custom Agent with Agent Pool]](201-agent) for all available agents. 
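Beyond the defaults above, the pre-built agents can also be given a role-specific system prompt when they are constructed. The snippet below is a minimal sketch based on the `DialogAgent` options shown in the agent tutorial; the model name and prompt text are placeholders, and the `model` field must match a config name you registered in Step 1.

```python
from agentscope.agents import DialogAgent

# Minimal sketch: a DialogAgent with a role-specific system prompt.
# "gpt-4" is a placeholder for whichever model config name was registered in Step 1.
teacher_config = {
    "name": "teacher",
    "model": "gpt-4",
    "sys_prompt": "You are a patient teacher who explains concepts step by step.",
}
teacher_agent = DialogAgent(**teacher_config)
```

An agent configured this way is used in the same way as the assistant above; only the persona baked into its replies changes.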
## Step3: Agent Conversation @@ -81,7 +82,7 @@ while True: # Terminate the conversation if the user types "exit" if x.content == "exit": - print("Exiting the conversation.") + print("Exiting the conversation.") break ``` @@ -93,11 +94,9 @@ from agentscope.pipelines.functional import sequentialpipeline # Execute the conversation loop within a pipeline structure x = None while x is None or x.content != "exit": - x = sequentialpipeline([dialog_agent, user_agent]) + x = sequentialpipeline([dialog_agent, user_agent]) ``` -For more details about how to utilize pipelines for complex agent interactions, please refer to [[Agent Interactions: Dive deeper into Pipelines and Message Hub]](https://alibaba.github.io/AgentScope/tutorial/202-pipeline.html). - - +For more details about how to utilize pipelines for complex agent interactions, please refer to [[Agent Interactions: Dive deeper into Pipelines and Message Hub]](202-pipeline). [[Return to the top]](#getting-started-with-a-simple-example) diff --git a/docs/sphinx_doc/source/tutorial/104-usecase.md b/docs/sphinx_doc/source/tutorial/104-usecase.md index edd35ea7b..a034d8bcc 100644 --- a/docs/sphinx_doc/source/tutorial/104-usecase.md +++ b/docs/sphinx_doc/source/tutorial/104-usecase.md @@ -1,3 +1,5 @@ +(104-usecase)= + # Crafting Your First Application img @@ -10,19 +12,25 @@ Let the adventure begin to unlock the potential of multi-agent applications with ## Getting Started -Firstly, ensure that you have installed and configured AgentScope properly. Besides, we will involve the basic concepts of `Model API`, `Agent`, `Msg`, and `Pipeline,` as described in [Tutorial-Concept](https://alibaba.github.io/AgentScope/tutorial/102-concepts.html). The overview of this tutorial is shown below: +Firstly, ensure that you have installed and configured AgentScope properly. Besides, we will involve the basic concepts of `Model API`, `Agent`, `Msg`, and `Pipeline,` as described in [Tutorial-Concept](102-concepts). The overview of this tutorial is shown below: -* [Step 1: Prepare **Model API** and Set Model Configs](#step-1-prepare-model-api-and-set-model-configs) -* [Step 2: Define the Roles of Each **Agent**](#step-2-define-the-roles-of-each-agent) -* [Step 3: **Initialize** AgentScope and the Agents](#step-3-initialize-agentscope-and-the-agents) -* [Step 4: Set Up the Game Logic with **Pipelines**](#step-4-set-up-the-game-logic-with-pipelines) -* [Step 5: **Run** the Application](#step-5-run-the-application) +- [Crafting Your First Application](#crafting-your-first-application) + - [Getting Started](#getting-started) + - [Step 1: Prepare Model API and Set Model Configs](#step-1-prepare-model-api-and-set-model-configs) + - [Step 2: Define the Roles of Each Agent](#step-2-define-the-roles-of-each-agent) + - [Step 3: Initialize AgentScope and the Agents](#step-3-initialize-agentscope-and-the-agents) + - [Step 4: Set Up the Game Logic](#step-4-set-up-the-game-logic) + - [Leverage Pipeline and MsgHub](#leverage-pipeline-and-msghub) + - [Implement Werewolf Pipeline](#implement-werewolf-pipeline) + - [Step 5: Run the Application](#step-5-run-the-application) + - [Next step](#next-step) + - [Other Example Applications](#other-example-applications) **Note**: all the configurations and code for this tutorial can be found in `examples/werewolf`. ### Step 1: Prepare Model API and Set Model Configs -As we discussed in the last tutorial, you need to prepare your model configurations into a JSON file for standard OpenAI chat API, FastChat, and vllm. 
More details and advanced usages such as configuring local models with POST API are presented in [Tutorial-Model-API](https://alibaba.github.io/AgentScope/tutorial/203-model.html). +As we discussed in the last tutorial, you need to prepare your model configurations into a JSON file for standard OpenAI chat API, FastChat, and vllm. More details and advanced usages such as configuring local models with POST API are presented in [Tutorial-Model-API](203-model). ```json [ @@ -138,7 +146,7 @@ To simplify the construction of agent communication, AgentScope provides two hel The game logic is divided into two major phases: (1) night when werewolves act, and (2) daytime when all players discuss and vote. Each phase will be handled by a section of code using pipelines to manage multi-agent communications. -* **1.1 Night Phase: Werewolves Discuss and Vote** +- **1.1 Night Phase: Werewolves Discuss and Vote** During the night phase, werewolves must discuss among themselves to decide on a target. The `msghub` function creates a message hub for the werewolves to communicate in, where every message sent by an agent is observable by all other agents within the `msghub`. @@ -169,7 +177,7 @@ After the discussion, werewolves proceed to vote for their target, and the major ) ``` -* **1.2 Witch's Turn** +- **1.2 Witch's Turn** If the witch is still alive, she gets the opportunity to use her powers to either save the player chosen by the werewolves or use her poison. @@ -178,7 +186,7 @@ If the witch is still alive, she gets the opportunity to use her powers to eithe healing_used_tonight = False if witch in survivors: if healing: - # Witch decides whether to use the healing potion + # Witch decides whether to use the healing potion hint = HostMsg( content=Prompts.to_witch_resurrect.format_map( {"witch_name": witch.name, "dead_name": dead_player[0]}, @@ -191,7 +199,7 @@ If the witch is still alive, she gets the opportunity to use her powers to eithe healing = False ``` -* **1.3 Seer's Turn** +- **1.3 Seer's Turn** The seer has a chance to reveal the true identity of a player. This information can be crucial for the villagers. The `observe()` function allows each agent to take note of a message without immediately replying to it. @@ -210,7 +218,7 @@ The seer has a chance to reveal the true identity of a player. This information seer.observe(hint) ``` -* **1.4 Update Alive Players** +- **1.4 Update Alive Players** Based on the actions taken during the night, the list of surviving players needs to be updated. @@ -219,7 +227,7 @@ Based on the actions taken during the night, the list of surviving players needs survivors, wolves = update_alive_players(survivors, wolves, dead_player) ``` -* **2.1 Daytime Phase: Discussion and Voting** +- **2.1 Daytime Phase: Discussion and Voting** During the day, all players will discuss and then vote to eliminate a suspected werewolf. @@ -239,7 +247,7 @@ During the day, all players will discuss and then vote to eliminate a suspected survivors, wolves = update_alive_players(survivors, wolves, vote_res) ``` -* **2.2 Check for Winning Conditions** +- **2.2 Check for Winning Conditions** After each phase, the game checks if the werewolves or villagers have won. @@ -249,7 +257,7 @@ After each phase, the game checks if the werewolves or villagers have won. break ``` -* **2.3 Continue to the Next Round** +- **2.3 Continue to the Next Round** If neither werewolves nor villagers win, the game continues to the next round. 
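Putting the phases above together, the top-level driver is just a bounded loop over rounds that alternates night and day and re-checks the winning condition after each phase. The sketch below is only an outline of that control flow: `MAX_GAME_ROUND`, `night_round`, and `day_round` are placeholder names standing in for the snippets above, not functions copied from `examples/werewolf`.

```python
# Outline of the round loop (placeholder names, not the exact example code).
MAX_GAME_ROUND = 6  # assumed cap on the number of rounds

for _ in range(MAX_GAME_ROUND):
    # Night phase: werewolves discuss and vote; witch and seer act (1.1-1.3).
    dead_player = night_round(survivors, wolves, witch, seer)
    survivors, wolves = update_alive_players(survivors, wolves, dead_player)
    if check_winning(survivors, wolves, "..."):  # 2.2
        break

    # Day phase: open discussion and a majority vote (2.1).
    vote_res = day_round(survivors)
    survivors, wolves = update_alive_players(survivors, wolves, vote_res)
    if check_winning(survivors, wolves, "..."):
        break
```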
@@ -311,7 +319,6 @@ Moderator: The day is coming, all the players open your eyes. Last night is peac Now you've grasped how to conveniently set up a multi-agent application with AgentScope. Feel free to tailor the game to include additional roles and introduce more sophisticated strategies. For more advanced tutorials that delve deeper into more capabilities of AgentScope, such as *memory management* and *service functions* utilized by agents, please refer to the tutorials in the **Advanced Exploration** section and look up the API references. - ## Other Example Applications - Example of Simple Group Conversation: [examples/Simple Conversation](https://github.com/alibaba/AgentScope/tree/main/examples/simple_chat/README.md) @@ -319,6 +326,4 @@ Now you've grasped how to conveniently set up a multi-agent application with Age - Example of Distributed Agents: [examples/Distributed Agents](https://github.com/alibaba/AgentScope/tree/main/examples/distributed_agents/README.md) - ... - - -[[Return to the top]](#crafting-your-first-application) \ No newline at end of file +[[Return to the top]](#crafting-your-first-application) diff --git a/docs/sphinx_doc/source/tutorial/105-logging.md b/docs/sphinx_doc/source/tutorial/105-logging.md index cc87749bd..42e05b5ff 100644 --- a/docs/sphinx_doc/source/tutorial/105-logging.md +++ b/docs/sphinx_doc/source/tutorial/105-logging.md @@ -1,3 +1,5 @@ +(105-logging)= + # Logging and WebUI Welcome to the tutorial on logging in multi-agent applications with AgentScope. We'll also touch on how you can visualize these logs using a simple web interface. This guide will help you track the agent's interactions and system information in a clearer and more organized way. @@ -65,17 +67,22 @@ logger.error("The agent encountered an unexpected error while processing a reque To visualize these logs, we provide a customized gradio component in `src/agentscope/web_ui`. ### Quick Running + For convince, we provide the pre-built app in a wheel file, you can run the WebUI in the following command: + ```shell pip install gradio_groupchat-0.0.1-py3-none-any.whl python app.py ``` + After the init and entering the UI port printed by `app.py`, e.g., `http://127.0.0.1:7860/`, you can choose `run.log.demo` in the top-middle `FileSelector` window (it's a demo log file provided by us). Then, the dialog and system log should be shown correctly in the bottom windows. ![webui](https://img.alicdn.com/imgextra/i2/O1CN01hSaFue1EdL2yCEznc_!!6000000000374-2-tps-3066-1808.png) ### For Other Customization + To customize the backend, or the frontend of the provided WebUI, you can + ```shell # generate the template codes # for network connectivity problem, try to run @@ -91,12 +98,11 @@ gradio cc dev ``` If you want to release the modification, you can do + ```shell gradio cc build pip install python app.py ``` - - -[[Return to the top]](#logging-and-webui) \ No newline at end of file +[[Return to the top]](#logging-and-webui) diff --git a/docs/sphinx_doc/source/tutorial/201-agent.md b/docs/sphinx_doc/source/tutorial/201-agent.md index e1cfb7a95..e498f18d1 100644 --- a/docs/sphinx_doc/source/tutorial/201-agent.md +++ b/docs/sphinx_doc/source/tutorial/201-agent.md @@ -1,3 +1,5 @@ +(201-agent)= + # Customizing Your Own Agent This tutorial helps you to understand the `Agent` in mode depth and navigate through the process of crafting your own custom agent with AgentScope. 
We start by introducing the fundamental abstraction called `AgentBase`, which serves as the base class to maintain the general behaviors of all agents. Then, we will go through the *AgentPool*, an ensemble of pre-built, specialized agents, each designed with a specific purpose in mind. Finally, we will demonstrate how to customize your own agent, ensuring it fits the needs of your project. @@ -8,11 +10,11 @@ The `AgentBase` class is the architectural cornerstone for all agent constructs Each AgentBase derivative is composed of several key characteristics: -* `memory`: This attribute enables agents to retain and recall past interactions, allowing them to maintain context in ongoing conversations. For more details about `memory`, we defer to [Memory and Message Management](https://alibaba.github.io/AgentScope/tutorial/205-memory.html). +* `memory`: This attribute enables agents to retain and recall past interactions, allowing them to maintain context in ongoing conversations. For more details about `memory`, we defer to [Memory and Message Management](205-memory). -* `model`: The model is the computational engine of the agent, responsible for making a response given existing memory and input. For more details about `model`, we defer to [Using Different Model Sources with Model API]https://alibaba.github.io/AgentScope/tutorial/203-model.html). +* `model`: The model is the computational engine of the agent, responsible for making a response given existing memory and input. For more details about `model`, we defer to [Using Different Model Sources with Model API](203-model). -* `sys_prompt` & `engine`: The system prompt acts as predefined instructions that guide the agent in its interactions; and the `engine` is used to dynamically generate a suitable prompt. For more details about them, we defer to [Prompt Engine](https://alibaba.github.io/AgentScope/tutorial/206-prompt.html). +* `sys_prompt` & `engine`: The system prompt acts as predefined instructions that guide the agent in its interactions; and the `engine` is used to dynamically generate a suitable prompt. For more details about them, we defer to [Prompt Engine](206-prompt). In addition to these attributes, `AgentBase` endows agents with pivotal methods such as `observe` and `reply`: @@ -21,7 +23,6 @@ In addition to these attributes, `AgentBase` endows agents with pivotal methods Besides, for unified interfaces and type hints, we introduce another base class `Operator`, which indicates performing some operation on input data by the `__call__` function. And we make `AgentBase` a subclass of `Operator`. - ```python class AgentBase(Operator): # ... [code omitted for brevity] @@ -37,7 +38,7 @@ class AgentBase(Operator): ) -> None: # ... [code omitted for brevity] - def observe(self, x: Union[dict, Sequence[dict]]) -> None: + def observe(self, x: Union[dict, Sequence[dict]]) -> None: # An optional method for updating the agent's internal state based on # messages it has observed. This method can be used to enrich the # agent's understanding and memory without producing an immediate @@ -58,7 +59,7 @@ class AgentBase(Operator): ## Exploring the AgentPool -The *AgentPool* within AgentScope is a curated ensemble of ready-to-use, specialized agents. Each of these agents is tailored for a distinct role and comes equipped with default behaviors that address specific tasks. The *AgentPool* is designed to expedite the development process by providing various templates of `Agent `. 
+The *AgentPool* within AgentScope is a curated ensemble of ready-to-use, specialized agents. Each of these agents is tailored for a distinct role and comes equipped with default behaviors that address specific tasks. The *AgentPool* is designed to expedite the development process by providing various templates of `Agent`. Below is a table summarizing the functionality of some of the key agents available in the Agent Pool: @@ -78,7 +79,7 @@ Below, we provide usages of how to configure various agents from the AgentPool: ### `DialogAgent` -- **Reply Method**: The `reply` method is where the main logic for processing input *message* and generating responses. +* **Reply Method**: The `reply` method is where the main logic for processing input *message* and generating responses. ```python def reply(self, x: dict = None) -> dict: @@ -101,7 +102,7 @@ def reply(self, x: dict = None) -> dict: return msg ``` -- **Usages:** To tailor a `DialogAgent` for a customer service bot: +* **Usages:** To tailor a `DialogAgent` for a customer service bot: ```python from agentscope.agents import DialogAgent @@ -121,7 +122,7 @@ service_bot = DialogAgent(**dialog_agent_config) ### `UserAgent` -- **Reply Method**: This method processes user input by prompting for content and if needed, additional keys and a URL. The gathered data is stored in a *message* object in the agent's memory for logging or later use and returns the message as a response. +* **Reply Method**: This method processes user input by prompting for content and if needed, additional keys and a URL. The gathered data is stored in a *message* object in the agent's memory for logging or later use and returns the message as a response. ```python def reply( @@ -156,7 +157,7 @@ def reply( return msg ``` -- **Usages:** To configure a `UserAgent` for collecting user input and URLs (of file, image, video, audio , or website): +* **Usages:** To configure a `UserAgent` for collecting user input and URLs (of file, image, video, audio , or website): ```python from agentscope.agents import UserAgent @@ -171,6 +172,4 @@ user_agent_config = { user_proxy_agent = UserAgent(**user_agent_config) ``` - - -[[Return to the top]](#customizing-your-own-agent) \ No newline at end of file +[[Return to the top]](#customizing-your-own-agent) diff --git a/docs/sphinx_doc/source/tutorial/202-pipeline.md b/docs/sphinx_doc/source/tutorial/202-pipeline.md index ebe23cd4d..d61313fe5 100644 --- a/docs/sphinx_doc/source/tutorial/202-pipeline.md +++ b/docs/sphinx_doc/source/tutorial/202-pipeline.md @@ -1,3 +1,5 @@ +(202-pipeline)= + # Agent Interactions: Dive deeper into Pipelines and Message Hub **Pipeline & MsgHub** (message hub) are one or a sequence of steps describing how the structured `Msg` passes between multi-agents, which streamlines the process of collaboration across agents. diff --git a/docs/sphinx_doc/source/tutorial/203-model.md b/docs/sphinx_doc/source/tutorial/203-model.md index 9d9d31cc3..92c8fde7c 100644 --- a/docs/sphinx_doc/source/tutorial/203-model.md +++ b/docs/sphinx_doc/source/tutorial/203-model.md @@ -1,3 +1,5 @@ +(203-model)= + # Using Different Model Sources with Model API AgentScope allows for the integration of multi-modal models from various sources. 
The core step is the initialization process, where once initialized with a certain config, all agent instances globally select the appropriate model APIs based on the model name specified (e.g., `model='gpt-4'`): @@ -21,7 +23,7 @@ where the model configs could be a list of dict: "temperature": 0.0 } }, - { + { "type": "openai_dall_e", "name": "dall-e-3", "parameters": { @@ -68,6 +70,7 @@ pip install Flask, transformers ``` Taking model `meta-llama/Llama-2-7b-chat-hf` and port `8000` as an example, set up the model API serving by running the following command. + ```bash python flask_transformers/setup_hf_service.py --model_name_or_path meta-llama/Llama-2-7b-chat-hf @@ -97,7 +100,6 @@ In AgentScope, you can load the model with the following model configs: `./flask In this model serving, the messages from post requests should be in **STRING** format. You can use [templates for chat model](https://huggingface.co/docs/transformers/main/chat_templating) from *transformers* with a little modification based on `./flask_transformers/setup_hf_service.py`. - #### With ModelScope Library ##### Install Libraries and Set up Serving @@ -119,7 +121,6 @@ python flask_modelscope/setup_ms_service.py You can replace `modelscope/Llama-2-7b-ms` with any model card in modelscope model hub. - ##### How to use AgentScope In AgentScope, you can load the model with the following model configs: `flask_modelscope/model_config.json`. @@ -140,7 +141,6 @@ In AgentScope, you can load the model with the following model configs: `flask_m Similar to the example of transformers, the messages from post requests should be in **STRING format**. - ### FastChat [FastChat](https://github.com/lm-sys/FastChat) is an open platform that provides a quick setup for model serving with OpenAI-compatible RESTful APIs. @@ -160,10 +160,13 @@ bash fastchat_script/fastchat_setup.sh -m meta-llama/Llama-2-7b-chat-hf -p 8000 ``` #### Supported Models + Refer to [supported model list](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md#supported-models) of FastChat. #### How to use in AgentScope + Now you can load the model in AgentScope by the following model config: `fastchat_script/model_config.json`. + ```json { "type": "openai", @@ -183,6 +186,7 @@ Now you can load the model in AgentScope by the following model config: `fastcha [vllm](https://github.com/vllm-project/vllm) is a high-throughput inference and serving engine for LLMs. #### Install Libraries and Set up Serving + To install vllm, run ```bash @@ -200,6 +204,7 @@ bash vllm_script/vllm_setup.sh -m meta-llama/Llama-2-7b-chat-hf -p 8000 Please refer to the [supported models list](https://docs.vllm.ai/en/latest/models/supported_models.html) of vllm. #### How to use in AgentScope + Now you can load the model in AgentScope by the following model config: `vllm_script/model_config.json`. ```json @@ -216,13 +221,12 @@ Now you can load the model in AgentScope by the following model config: `vllm_sc } ``` - ## Model Inference API Both [Huggingface](https://huggingface.co/docs/api-inference/index) and [ModelScope](https://www.modelscope.cn) provide model inference API, which can be used with AgentScope post API model wrapper. Taking `gpt2` in HuggingFace inference API as an example, you can use the following model config in AgentScope. 
-```bash +```json { "type": "post_api", "name": 'gpt2', @@ -247,6 +251,4 @@ model.eval() agent = YourAgent(name='agent', model=model, tokenizer=tokenizer) ``` - - -[[Return to the top]](#using-different-model-sources-with-model-api) \ No newline at end of file +[[Return to the top]](#using-different-model-sources-with-model-api) diff --git a/docs/sphinx_doc/source/tutorial/204-service.md b/docs/sphinx_doc/source/tutorial/204-service.md index 7a185871a..448627a28 100644 --- a/docs/sphinx_doc/source/tutorial/204-service.md +++ b/docs/sphinx_doc/source/tutorial/204-service.md @@ -1,3 +1,5 @@ +(204-service)= + # Enhancing Agent Capabilities with Service Functions **Service functions**, often referred to simply as **Service**, constitute a versatile suite of utility tools that can be used to enhance the functionality of agents. A service is designed to perform a specific task like web search, code interpretation, or file processing. Services can be invoked by agents and other components for reuse across different scenarios. @@ -9,14 +11,14 @@ The design behind `Service` distinguishes them from typical Python functions. In ```python def demo_service() -> ServiceResponse: #do some specifc actions - # ...... - res = ServiceResponse({status=status, content=content}) - return res + # ...... + res = ServiceResponse({status=status, content=content}) + return res class ServiceResponse(dict): """Used to wrap the execution results of the services""" - # ... [code omitted for brevity] + # ... [code omitted for brevity] def __init__( self, @@ -130,5 +132,4 @@ class YourAgent(AgentBase): # ... [code omitted for brevity] ``` - -[[Return to the top]](#enhancing-agent-capabilities-with-service-functions) \ No newline at end of file +[[Return to the top]](#enhancing-agent-capabilities-with-service-functions) diff --git a/docs/sphinx_doc/source/tutorial/205-memory.md b/docs/sphinx_doc/source/tutorial/205-memory.md index cdeba4f7e..728bc1f29 100644 --- a/docs/sphinx_doc/source/tutorial/205-memory.md +++ b/docs/sphinx_doc/source/tutorial/205-memory.md @@ -1,3 +1,5 @@ +(205-memory)= + # Memory and Message Management **Message** represents individual pieces of information or interactions flowing between/within agents. **Memory** refers to the storage and retrieval of historical information and serves as the storage and management system for the messages. This allows the agent to remember past interactions, maintain context, and provide more coherent and relevant responses. @@ -48,9 +50,9 @@ The `Msg` ("Message") subclass extends `MessageBase` and represents a standard * ```python class Msg(MessageBase): - # ... [code omitted for brevity] + # ... [code omitted for brevity] - def to_str(self) -> str: + def to_str(self) -> str: return f"{self.name}: {self.content}" def serialize(self) -> str: @@ -66,7 +68,7 @@ The `Tht` ("Thought") subclass is a specialized form of `MessageBase` used for e ```python class Tht(MessageBase): - # ... [code omitted for brevity] + # ... 
[code omitted for brevity] def to_str(self) -> str: return f"{self.name} thought: {self.content}" @@ -93,7 +95,7 @@ class MemoryBase(ABC): recent_n: Optional[int] = None, filter_func: Optional[Callable[[int, dict], bool]] = None, ) -> Union[list, str]: - raise NotImplementedError + raise NotImplementedError def add(self, memories: Union[list[dict], dict]) -> None: raise NotImplementedError @@ -141,6 +143,4 @@ The `TemporaryMemory` class is a concrete implementation of `MemoryBase`, provid For more details about the usage of `Memory` and `Msg`, please refer to the API references. - - -[[Return to the top]](#memory-and-message-management) \ No newline at end of file +[[Return to the top]](#memory-and-message-management) diff --git a/docs/sphinx_doc/source/tutorial/206-prompt.md b/docs/sphinx_doc/source/tutorial/206-prompt.md index 56df050ef..de580de47 100644 --- a/docs/sphinx_doc/source/tutorial/206-prompt.md +++ b/docs/sphinx_doc/source/tutorial/206-prompt.md @@ -1,3 +1,5 @@ +(206-prompt)= + # Prompt Engine **Prompt** is a crucial component in interacting with language models, especially when seeking to generate specific types of outputs or guide the model toward desired behaviors. This tutorial will guide you through the use of the `PromptEngine` class, which simplifies the process of crafting prompts for LLMs. @@ -6,7 +8,7 @@ The `PromptEngine` class provides a structured way to combine different components of a prompt, such as instructions, hints, dialogue history, and user inputs, into a format that is suitable for the underlying language model. -### Key Features of PromptEngine: +### Key Features of PromptEngine - **Model Compatibility**: It works with any `ModelWrapperBase` subclass. - **Shrink Policy**: It offers two policies for handling prompts that exceed the maximum length: `ShrinkPolicy.TRUNCATE` to simply truncate the prompt, and `ShrinkPolicy.SUMMARIZE` to summarize part of the dialog history to save space. @@ -61,6 +63,4 @@ hint_prompt = "Find the weather in {location}." prompt = engine.join(system_prompt, user_input, hint_prompt, format_map=variables) ``` - - -[[Return to the top]](#prompt-engine) \ No newline at end of file +[[Return to the top]](#prompt-engine) diff --git a/docs/sphinx_doc/source/tutorial/207-monitor.md b/docs/sphinx_doc/source/tutorial/207-monitor.md index 79f496af3..3c0c436f1 100644 --- a/docs/sphinx_doc/source/tutorial/207-monitor.md +++ b/docs/sphinx_doc/source/tutorial/207-monitor.md @@ -1,3 +1,5 @@ +(207-monitor)= + # Monitor In multi-agent applications, particularly those that rely on external model APIs, it's crucial to monitor the usage and cost to prevent overutilization and ensure compliance with rate limits. The `MonitorBase` class and its implementation, `DictMonitor`, provide a way to track and regulate the usage of such APIs in your applications. In this tutorial, you'll learn how to use them to monitor API calls. 
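Before the API details below, here is a rough sketch of the intended workflow: obtain the monitor singleton, then record usage against a named metric. Only `MonitorFactory.get_monitor()` is taken from this tutorial; the import path and the `register`/`add`/`get_value` calls are assumptions about the bookkeeping interface and may not match the actual `DictMonitor` signatures.

```python
# A sketch only: the import path and the register/add/get_value calls are
# assumptions about the monitor's interface, not confirmed API.
from agentscope.utils import MonitorFactory

monitor = MonitorFactory.get_monitor()      # singleton accessor, as shown below

monitor.register("token_num", quota=1000)   # assumed: declare a metric with a quota
monitor.add("token_num", 20)                # assumed: record usage after an API call
print(monitor.get_value("token_num"))       # assumed: read back the running total
```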
@@ -95,6 +97,4 @@ Get the singleton instance of the monitor: monitor = MonitorFactory.get_monitor() ``` - - -[[Return to the top]](#monitoring-and-logging) \ No newline at end of file +[[Return to the top]](#monitoring-and-logging) diff --git a/docs/sphinx_doc/source/tutorial/208-distribute.md b/docs/sphinx_doc/source/tutorial/208-distribute.md index 66ca7bcb5..d96a95d7d 100644 --- a/docs/sphinx_doc/source/tutorial/208-distribute.md +++ b/docs/sphinx_doc/source/tutorial/208-distribute.md @@ -1,3 +1,5 @@ +(208-distribute)= + # Make Your Applications Distributed AgentScope is designed to be fully distributed, agent instances in one application can be deployed on different machines and run in parallel. This tutorial will introduce the features of AgentScope distributed and the distributed deployment method. @@ -144,4 +146,4 @@ while x is None or x.content != 'exit': x = b(x) ``` -[[Return to the top]](#make-your-applications-distributed) \ No newline at end of file +[[Return to the top]](#make-your-applications-distributed) diff --git a/docs/sphinx_doc/source/tutorial/301-community.md b/docs/sphinx_doc/source/tutorial/301-community.md index c4cae8faf..60cec1923 100644 --- a/docs/sphinx_doc/source/tutorial/301-community.md +++ b/docs/sphinx_doc/source/tutorial/301-community.md @@ -1,3 +1,5 @@ +(301-community)= + # Joining The AgentScope Community Becoming a part of the AgentScope community allows you to connect with other users and developers. You can share insights, ask questions, and keep up-to-date with the latest developments and interesting multi-agent applications. Here's how you can join us: @@ -22,13 +24,11 @@ Becoming a part of the AgentScope community allows you to connect with other use Our DingTalk group invitation: [AgentScope DingTalk Group](https://qr.dingtalk.com/action/joingroup?code=v1,k1,20IUyRX5XZQ2vWjKDsjvI9dhcXjGZi3bq1pFfDZINCM=&_dt_no_comment=1&origin=11) ## Wechat + Scan the QR code below on Wechat to join: AgentScope-wechat --- We welcome everyone interested in AgentScope to join our community and contribute to the growth of the platform! - - [[Return to the top]](#joining-the-agentscope-community) - diff --git a/docs/sphinx_doc/source/tutorial/302-contribute.md b/docs/sphinx_doc/source/tutorial/302-contribute.md index b33762323..79f9b98e0 100644 --- a/docs/sphinx_doc/source/tutorial/302-contribute.md +++ b/docs/sphinx_doc/source/tutorial/302-contribute.md @@ -1,3 +1,5 @@ +(302-contribute)= + # Contributing to AgentScope Our community thrives on the diverse ideas and contributions of its members. Whether you're fixing a bug, adding a new feature, improving the documentation, or adding examples, your help is welcome. Here's how you can contribute: @@ -5,6 +7,7 @@ Our community thrives on the diverse ideas and contributions of its members. Whe ## Report Bugs and Ask For New Features? Did you find a bug or have a feature request? Please first check the issue tracker to see if it has already been reported. If not, feel free to open a new issue. Include as much detail as possible: + - A descriptive title - Clear description of the issue - Steps to reproduce the problem @@ -64,6 +67,4 @@ We will review your pull request. This process might involve some discussion, ad Wait for us to review your pull request. We may suggest some changes or improvements. Keep an eye on your GitHub notifications and be responsive to any feedback. 
-
-
-[[Return to the top]](#contributing-to-agentScope)
\ No newline at end of file
+[[Return to the top]](#contributing-to-agentscope)
diff --git a/docs/sphinx_doc/source/tutorial/advance.rst b/docs/sphinx_doc/source/tutorial/advance.rst
index fdabbee99..ff483b9b2 100644
--- a/docs/sphinx_doc/source/tutorial/advance.rst
+++ b/docs/sphinx_doc/source/tutorial/advance.rst
@@ -1,5 +1,5 @@
 Advanced Exploration
-===============
+====================
 
 .. toctree::
    :maxdepth: 2
diff --git a/docs/sphinx_doc/source/tutorial/main.md b/docs/sphinx_doc/source/tutorial/main.md
index bbb8fa546..0460e6b98 100644
--- a/docs/sphinx_doc/source/tutorial/main.md
+++ b/docs/sphinx_doc/source/tutorial/main.md
@@ -12,24 +12,24 @@ AgentScope is an innovative multi-agent platform designed to empower developers
 
 ### Getting Started
 
-- [Installation Guide](tutorial/101-installation.md)
-- [Fundamental Concepts](tutorial/102-concepts.md)
-- [Getting Started with a Simple Example](tutorial/103-example.md)
-- [Crafting Your First Application](tutorial/104-usecase.md)
-- [Logging and WebUI](tutorial/105-logging.md)
+- [Installation Guide](101-installation)
+- [Fundamental Concepts](102-concepts)
+- [Getting Started with a Simple Example](103-example)
+- [Crafting Your First Application](104-usecase)
+- [Logging and WebUI](105-logging)
 
 ### Advanced Exploration
 
-- [Customizing Your Own Agent](tutorial/201-agent.md)
-- [Agent Interactions: Dive deeper into Pipelines and Message Hub](tutorial/202-pipeline.md)
-- [Using Different Model Sources with Model API](tutorial/203-model.md)
-- [Enhancing Agent Capabilities with Service Functions](tutorial/204-service.md)
-- [Memory and Message Management](tutorial/205-memory.md)
-- [Prompt Engine](tutorial/206-prompt.md)
-- [Monitoring](tutorial/207-monitor.md)
-- [Distributed Deployment](tutorial/208-distribute.md)
+- [Customizing Your Own Agent](201-agent)
+- [Agent Interactions: Dive deeper into Pipelines and Message Hub](202-pipeline)
+- [Using Different Model Sources with Model API](203-model)
+- [Enhancing Agent Capabilities with Service Functions](204-service)
+- [Memory and Message Management](205-memory)
+- [Prompt Engine](206-prompt)
+- [Monitoring](207-monitor)
+- [Distributed Deployment](208-distribute)
 
 ### Getting Involved
 
-* [Joining The AgentScope Community](tutorial/301-community.md)
-* [Contributing to AgentScope](tutorial/302-contribute.md)
\ No newline at end of file
+- [Joining The AgentScope Community](301-community)
+- [Contributing to AgentScope](302-contribute)