Commit
chore: merge main
kaancayli committed Oct 7, 2024
2 parents 4c653ed + 79414c7 commit 005bcfa
Showing 5 changed files with 507 additions and 17 deletions.
352 changes: 340 additions & 12 deletions README.MD
@@ -1,16 +1,344 @@
# Pyris V2

Pyris is an intermediary system that connects the [Artemis](https://github.com/ls1intum/Artemis) platform with various Large Language Models (LLMs). It provides a REST API that allows Artemis to interact with different pipelines based on specific tasks.

Currently, Pyris powers [Iris](https://artemis.cit.tum.de/about-iris), a virtual AI tutor that assists students with their programming exercises on Artemis in a pedagogically meaningful way.

## Table of Contents
- [Features](#features)
- [Setup](#setup)
- [Prerequisites](#prerequisites)
- [Local Development Setup](#local-development-setup)
- [Docker Setup](#docker-setup)
- [Development Environment](#development-environment)
- [Production Environment](#production-environment)
- [Customizing Configuration](#customizing-configuration)
- [Troubleshooting](#troubleshooting)
- [Additional Notes](#additional-notes)

## Features

- **Exercise Support**: Empowers Iris to provide feedback on programming exercises, enhancing the learning experience for students. Iris analyzes submitted code, feedback, and build logs generated by Artemis to provide detailed insights.

- **Course Content Support**: Leverages RAG (Retrieval-Augmented Generation) to enable Iris to provide detailed explanations for course content, making it easier for students to understand complex topics based on instructor-provided learning materials.

- **Competency Generation**: Automates the generation of competencies for courses, reducing manual effort in creating Artemis competencies.

## Setup

### Prerequisites

- **Python 3.12**: Ensure that Python 3.12 is installed.

```bash
python --version
```

- **Docker and Docker Compose**: Required for containerized deployment.

---

### Local Development Setup

> **Note:** If you need to modify the local Weaviate vector database setup, please refer to the [Weaviate Documentation](https://weaviate.io/developers/weaviate/quickstart).

#### Steps

1. **Clone the Pyris Repository**

Clone the Pyris repository into a directory on your machine:

```bash
git clone https://github.com/ls1intum/Pyris.git Pyris
```

2. **Install Dependencies**

Navigate to the Pyris directory and install the required Python packages:

```bash
cd Pyris
pip install -r requirements.txt
```

3. **Create Configuration Files**

- **Create an Application Configuration File**

Create an `application.local.yml` file in the root directory. You can use the provided `application.example.yml` as a base.

```bash
cp application.example.yml application.local.yml
```

**Example `application.local.yml`:**

```yaml
api_keys:
  - token: "your-secret-token"

weaviate:
  host: "localhost"
  port: "8001"
  grpc_port: "50051"

env_vars:
```

- **Create an LLM Config File**

Create an `llm-config.local.yml` file in the root directory. You can use the provided `llm-config.example.yml` as a base.

```bash
cp llm-config.example.yml llm-config.local.yml
```

**Example OpenAI Configuration:**

```yaml
- id: "oai-gpt-35-turbo"
  name: "GPT 3.5 Turbo"
  description: "GPT 3.5 16k"
  type: "openai_chat"
  model: "gpt-3.5-turbo"
  api_key: "<your_openai_api_key>"
  tools: []
  capabilities:
    input_cost: 0.5
    output_cost: 1.5
    gpt_version_equivalent: 3.5
    context_length: 16385
    vendor: "OpenAI"
    privacy_compliance: false
    self_hosted: false
    image_recognition: false
    json_mode: true
```

**Example Azure OpenAI Configuration:**

```yaml
- id: "azure-gpt-4-omni"
  name: "GPT 4 Omni"
  description: "GPT 4 Omni on Azure"
  type: "azure_chat"
  endpoint: "<your_azure_model_endpoint>"
  api_version: "2024-02-15-preview"
  azure_deployment: "gpt4o"
  model: "gpt4o"
  api_key: "<your_azure_api_key>"
  tools: []
  capabilities:
    input_cost: 6
    output_cost: 16
    gpt_version_equivalent: 4.5  # Equivalent GPT version of the model
    context_length: 128000
    vendor: "OpenAI"
    privacy_compliance: true
    self_hosted: false
    image_recognition: true
    json_mode: true
```

**Explanation of Configuration Parameters**

The configuration parameters are used by Pyris's capability system to select the appropriate model for a task.
**Parameter Descriptions:**
- `api_key`: The API key for the model.
- `capabilities`: The capabilities of the model.
- `context_length`: The maximum number of tokens the model can process in a single request.
- `gpt_version_equivalent`: The equivalent GPT version of the model in terms of overall capabilities.
- `image_recognition`: Whether the model supports image recognition (for multimodal models).
- `input_cost`: The cost of input tokens for the model.
- `output_cost`: The cost of output tokens for the model.
- `json_mode`: Whether the model supports structured JSON output mode.
- `privacy_compliance`: Whether the model complies with privacy regulations.
- `self_hosted`: Whether the model is self-hosted.
- `vendor`: The provider of the model (e.g., OpenAI).
- `speed`: The model's processing speed.
- `description`: Additional information about the model.
- `id`: Unique identifier for the model across all models.
- `model`: The official name of the model as used by the vendor.
- `name`: A custom, human-readable name for the model.
- `type`: The model type, used to select the appropriate client (e.g., `openai_chat`, `azure_chat`, `ollama`).
- `endpoint`: The URL to connect to the model.
- `api_version`: The API version to use with the model.
- `azure_deployment`: The deployment name of the model on Azure.
- `tools`: The tools supported by the model.
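
Taken together, these fields describe what every config entry must provide. As an illustration only (this validator is not part of Pyris, and the set of strictly required fields is an assumption), a minimal sanity check of an entry could look like this:

```python
# Hypothetical sketch: checking one llm-config entry before start-up.
# Field names mirror the parameter list above; which fields are strictly
# required is an assumption made for this illustration.

REQUIRED_TOP_LEVEL = {"id", "name", "description", "type", "model", "api_key"}
REQUIRED_CAPABILITIES = {"gpt_version_equivalent", "context_length"}

def validate_model_entry(entry: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = [f"missing field: {f}" for f in REQUIRED_TOP_LEVEL - entry.keys()]
    caps = entry.get("capabilities", {})
    problems += [f"missing capability: {f}" for f in REQUIRED_CAPABILITIES - caps.keys()]
    if entry.get("type") == "azure_chat" and "endpoint" not in entry:
        problems.append("azure_chat models need an endpoint")
    return problems

entry = {
    "id": "oai-gpt-35-turbo",
    "name": "GPT 3.5 Turbo",
    "description": "GPT 3.5 16k",
    "type": "openai_chat",
    "model": "gpt-3.5-turbo",
    "api_key": "<your_openai_api_key>",
    "capabilities": {"gpt_version_equivalent": 3.5, "context_length": 16385},
}
print(validate_model_entry(entry))  # → []
```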

> **Notes on `gpt_version_equivalent`:** The `gpt_version_equivalent` field is subjective and is used to compare the capabilities of different models, using GPT models as a reference. For example:
> - GPT-4 Omni equivalent: 4.5
> - GPT-4 Omni Mini equivalent: 4.25
> - GPT-4 equivalent: 4
> - GPT-3.5 Turbo equivalent: 3.5

> **Warning:** Most existing pipelines in Pyris require a model with a `gpt_version_equivalent` of **4.5 or higher**. It is therefore advised to define at least one such model in your `llm-config.local.yml`.
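
To make the capability mechanism concrete, here is a simplified sketch of how requirements like a minimum `gpt_version_equivalent` could drive model selection. This is not Pyris's actual implementation (the real handler weighs many more capabilities, such as privacy, speed, and context length); it only illustrates the idea:

```python
# Simplified illustration of capability-based model selection.
# Not the actual Pyris logic: the real CapabilityRequestHandler
# considers more capabilities than this sketch does.

models = [
    {"id": "oai-gpt-35-turbo", "gpt_version_equivalent": 3.5, "input_cost": 0.5},
    {"id": "azure-gpt-4-omni", "gpt_version_equivalent": 4.5, "input_cost": 6},
]

def select_model(models: list[dict], min_gpt_version: float) -> dict:
    """Pick the cheapest model that meets the minimum capability requirement."""
    eligible = [m for m in models if m["gpt_version_equivalent"] >= min_gpt_version]
    if not eligible:
        raise ValueError(f"no model with gpt_version_equivalent >= {min_gpt_version}")
    return min(eligible, key=lambda m: m["input_cost"])

print(select_model(models, 4.5)["id"])  # → azure-gpt-4-omni
```

This is why the warning above matters: if no configured model meets a pipeline's requirement, no model can be selected for it.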

4. **Run the Server**

Start the Pyris server:

```bash
APPLICATION_YML_PATH=./application.local.yml \
LLM_CONFIG_PATH=./llm-config.local.yml \
uvicorn app.main:app --reload
```

5. **Access API Documentation**

Open your browser and navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to access the interactive API documentation.
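
Once the server is running, you can also exercise the API from a script. The route and header name below are assumptions made for illustration; consult the generated docs at `/docs` for the real endpoints and authentication scheme:

```python
# Hypothetical request sketch: how a client might call Pyris once the
# server is up. The route and the Authorization header are assumptions;
# check http://localhost:8000/docs for the actual API surface.
import urllib.request

token = "your-secret-token"  # must match a token in application.local.yml
req = urllib.request.Request(
    "http://localhost:8000/api/v1/health",  # hypothetical route
    headers={"Authorization": token},
)
print(req.get_header("Authorization"))  # → your-secret-token
# urllib.request.urlopen(req) would send the request to the running server.
```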

---

### Docker Setup

Deploying Pyris using Docker ensures a consistent environment and simplifies the deployment process.

#### Prerequisites

- **Docker**: Install Docker from the [official website](https://www.docker.com/get-started).
- **Docker Compose**: Comes bundled with Docker Desktop or install separately on Linux.
- **Clone the Pyris Repository**: If not already done, clone the repository:

  ```bash
  git clone https://github.com/ls1intum/Pyris.git Pyris
  cd Pyris
  ```

- **Create Configuration Files**: Create the `application.local.yml` and `llm-config.local.yml` files as described in the [Local Development Setup](#local-development-setup) section.

#### Docker Compose Files

- **Development**: `docker-compose/pyris-dev.yml`
- **Production with Nginx**: `docker-compose/pyris-production.yml`
- **Production without Nginx**: `docker-compose/pyris-production-internal.yml`

#### Running the Containers

##### **Development Environment**

1. **Start the Containers**

```bash
docker-compose -f docker-compose/pyris-dev.yml up --build
```

- Builds the Pyris application.
- Starts Pyris and Weaviate in development mode.
- Mounts local configuration files for easy modification.

2. **Access the Application**

- Application URL: [http://localhost:8000](http://localhost:8000)
- API Docs: [http://localhost:8000/docs](http://localhost:8000/docs)

##### **Production Environment**

###### **Option 1: With Nginx**

1. **Prepare SSL Certificates**

- Place your SSL certificate (`fullchain.pem`) and private key (`priv_key.pem`) in the specified paths or update the paths in the Docker Compose file.

2. **Start the Containers**

```bash
docker-compose -f docker-compose/pyris-production.yml up -d
```

- Pulls the latest Pyris image.
- Starts Pyris, Weaviate, and Nginx.
- Nginx handles SSL termination and reverse proxying.

3. **Access the Application**

- Application URL: `https://your-domain.com`

###### **Option 2: Without Nginx**

1. **Start the Containers**

```bash
docker-compose -f docker-compose/pyris-production-internal.yml up -d
```

- Pulls the latest Pyris image.
- Starts Pyris and Weaviate.

2. **Access the Application**

- Application URL: [http://localhost:8000](http://localhost:8000)

---

#### Managing the Containers

- **Stop the Containers**

```bash
docker-compose -f <compose-file> down
```

Replace `<compose-file>` with the appropriate Docker Compose file.

- **View Logs**

```bash
docker-compose -f <compose-file> logs -f <service-name>
```

Example:

```bash
docker-compose -f docker-compose/pyris-dev.yml logs -f pyris-app
```

- **Rebuild Containers**

If you've made changes to the code or configuration:

```bash
docker-compose -f <compose-file> up --build
```

#### Customizing Configuration

- **Environment Variables**

  You can customize settings using environment variables:

  - `PYRIS_DOCKER_TAG`: The Pyris Docker image tag.
  - `PYRIS_APPLICATION_YML_FILE`: Path to your `application.yml` file.
  - `PYRIS_LLM_CONFIG_YML_FILE`: Path to your `llm-config.yml` file.
  - `PYRIS_PORT`: Host port for the Pyris application (default: `8000`).
  - `WEAVIATE_PORT`: Host port for the Weaviate REST API (default: `8001`).
  - `WEAVIATE_GRPC_PORT`: Host port for the Weaviate gRPC interface (default: `50051`).

- **Configuration Files**

  Modify configuration files as needed:

  - **Pyris Configuration**: Update `application.yml` and `llm-config.yml`.
  - **Weaviate Configuration**: Adjust settings in `weaviate.yml`.
  - **Nginx Configuration**: Modify Nginx settings in `nginx.yml` and related config files.
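
The port variables are presumably resolved with fallback defaults, so leaving them unset is safe. A tiny sketch of that assumed fallback behavior:

```python
# Sketch of the fallback behaviour the compose setup is assumed to use:
# each variable falls back to its documented default when unset.
import os

def resolve_port(var: str, default: int) -> int:
    """Read a port from the environment, falling back to the default."""
    return int(os.environ.get(var, default))

os.environ.pop("PYRIS_PORT", None)  # make sure it is unset for the demo
print(resolve_port("PYRIS_PORT", 8000))  # → 8000
```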
## Troubleshooting

- **Port Conflicts**

  If you encounter port conflicts, change the host ports using environment variables:

  ```bash
  export PYRIS_PORT=8080
  ```

- **Permission Issues**

  Ensure you have the necessary permissions for files and directories, especially for SSL certificates.

- **Docker Resources**

  If services fail to start, ensure Docker has sufficient resources allocated.
7 changes: 6 additions & 1 deletion app/pipeline/competency_extraction_pipeline.py
@@ -31,7 +31,12 @@ def __init__(self, callback: Optional[CompetencyExtractionCallback] = None):
             implementation_id="competency_extraction_pipeline_reference_impl"
         )
         self.callback = callback
-        self.request_handler = CapabilityRequestHandler(requirements=RequirementList())
+        self.request_handler = CapabilityRequestHandler(
+            requirements=RequirementList(
+                gpt_version_equivalent=4.5,
+                context_length=16385,
+            )
+        )
         self.output_parser = PydanticOutputParser(pydantic_object=Competency)

     def __call__(
5 changes: 1 addition & 4 deletions application.test.yml → application.example.yml
@@ -4,7 +4,4 @@ api_keys:
 weaviate:
   host: "localhost"
   port: "8001"
-  grpc_port: "50051"
-
-env_vars:
-  test: "test"
+  grpc_port: "50051"