
Commit

Merge branch 'BerriAI:main' into main
hughcrt authored Jun 13, 2024
2 parents 9927d1e + eeb0e7d commit 03f63b9
Showing 90 changed files with 202,887 additions and 355 deletions.
1 change: 1 addition & 0 deletions .circleci/config.yml
@@ -202,6 +202,7 @@ jobs:
-e REDIS_PORT=$REDIS_PORT \
-e AZURE_FRANCE_API_KEY=$AZURE_FRANCE_API_KEY \
-e AZURE_EUROPE_API_KEY=$AZURE_EUROPE_API_KEY \
-e MISTRAL_API_KEY=$MISTRAL_API_KEY \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e AWS_REGION_NAME=$AWS_REGION_NAME \
8 changes: 8 additions & 0 deletions docs/my-website/docs/observability/raw_request_response.md
@@ -4,6 +4,7 @@ import Image from '@theme/IdealImage';

See the raw request/response sent by LiteLLM in your logging provider (OTEL/Langfuse/etc.).

**on SDK**
```python
# pip install langfuse
import litellm
@@ -33,6 +34,13 @@ response = litellm.completion(
)
```

**on Proxy**

```yaml
litellm_settings:
  log_raw_request_response: True
```
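
Once the proxy is restarted with this setting, any request routed through it should show up with the raw request/response attached in your logging provider. Below is a minimal sketch of triggering such a request, assuming the proxy is running on `http://0.0.0.0:4000` with a virtual key `sk-1234` and a model named `gpt-3.5-turbo` in your `config.yaml` (all placeholder values):

```python
# pip install openai
import openai

# placeholder proxy URL, key, and model name - replace with your own setup
client = openai.OpenAI(
    api_key="sk-1234",              # litellm proxy virtual key
    base_url="http://0.0.0.0:4000"  # litellm proxy base url
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model configured in config.yaml
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
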
**Expected Log**
<Image img={require('../../img/raw_request_log.png')}/>
4 changes: 3 additions & 1 deletion docs/my-website/docs/providers/azure.md
@@ -68,6 +68,7 @@ response = litellm.completion(

| Model Name | Function Call |
|------------------|----------------------------------------|
| gpt-4o | `completion('azure/<your deployment name>', messages)` |
| gpt-4 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-0314 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-0613 | `completion('azure/<your deployment name>', messages)` |
@@ -85,7 +86,8 @@ response = litellm.completion(
## Azure OpenAI Vision Models
| Model Name | Function Call |
|-----------------------|-----------------------------------------------------------------|
| gpt-4-vision | `completion(model="azure/<your deployment name>", messages=messages)` |
| gpt-4o | `completion('azure/<your deployment name>', messages)` |

#### Usage
```python
163 changes: 131 additions & 32 deletions docs/my-website/docs/providers/azure_ai.md
@@ -3,53 +3,155 @@ import TabItem from '@theme/TabItem';

# Azure AI Studio

LiteLLM supports all models on Azure AI Studio.

**Ensure the following:**
1. The API base passed ends with the `/v1/` suffix, for example:

   ```python
   api_base = "https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1/"
   ```

2. The `model` passed is listed in [supported models](#supported-models). You **do not** need to pass your deployment name to litellm, e.g. `model=azure/Mistral-large-nmefg`.

## Usage

<Tabs>
<TabItem value="sdk" label="SDK">

### ENV VAR
```python
import os
os.environ["AZURE_API_API_KEY"] = ""
os.environ["AZURE_AI_API_BASE"] = ""
```

### Example Call

```python
from litellm import completion
import os

## set ENV variables
os.environ["AZURE_AI_API_KEY"] = "azure ai key"
os.environ["AZURE_AI_API_BASE"] = "azure ai base url" # e.g.: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/

# Azure AI Studio command-r-plus call
response = completion(
    model="azure_ai/command-r-plus",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
```

</TabItem>
<TabItem value="proxy" label="PROXY">

## Sample Usage - LiteLLM Proxy

1. Add models to your config.yaml

```yaml
model_list:
  - model_name: mistral
    litellm_params:
      model: azure/mistral-large-latest
      api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1/
      api_key: JGbKodRcTp****
  - model_name: command-r-plus
    litellm_params:
      model: azure_ai/command-r-plus
      api_key: os.environ/AZURE_AI_API_KEY
      api_base: os.environ/AZURE_AI_API_BASE
```
2. Start the proxy
```bash
$ litellm --config /path/to/config.yaml --debug
```

3. Send Request to LiteLLM Proxy Server

<Tabs>

<TabItem value="openai" label="OpenAI Python v1.0.0+">

```python
import openai
client = openai.OpenAI(
    api_key="sk-1234",             # pass litellm proxy key, if you're using virtual keys
    base_url="http://0.0.0.0:4000" # litellm-proxy-base url
)

response = client.chat.completions.create(
    model="command-r-plus",
    messages=[
        {
            "role": "system",
            "content": "Be a good human!"
        },
        {
            "role": "user",
            "content": "What do you know about earth?"
        }
    ]
)

print(response)
```

</TabItem>

<TabItem value="curl" label="curl">

```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"model": "command-r-plus",
"messages": [
{
"role": "system",
"content": "Be a good human!"
},
{
"role": "user",
"content": "What do you know about earth?"
}
],
}'
```
</TabItem>

</Tabs>


</TabItem>

</Tabs>

## Passing additional params - max_tokens, temperature
See all litellm.completion supported params [here](../completion/input.md#translated-openai-params)

```python
# !pip install litellm
from litellm import completion
import os
## set ENV variables
os.environ["AZURE_AI_API_KEY"] = "azure ai api key"
os.environ["AZURE_AI_API_BASE"] = "azure ai api base"

# command r plus call
response = completion(
    model="azure_ai/command-r-plus",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    max_tokens=20,
    temperature=0.5
)
```

**proxy**

```yaml
model_list:
  - model_name: command-r-plus
    litellm_params:
      model: azure_ai/command-r-plus
      api_key: os.environ/AZURE_AI_API_KEY
      api_base: os.environ/AZURE_AI_API_BASE
      max_tokens: 20
      temperature: 0.5
```
2. Start the proxy
```bash
@@ -103,9 +205,6 @@ response = litellm.completion(

</Tabs>

</TabItem>
</Tabs>

## Function Calling

<Tabs>
@@ -115,8 +214,8 @@ response = litellm.completion(
from litellm import completion

# set env
os.environ["AZURE_MISTRAL_API_KEY"] = "your-api-key"
os.environ["AZURE_MISTRAL_API_BASE"] = "your-api-base"
os.environ["AZURE_AI_API_KEY"] = "your-api-key"
os.environ["AZURE_AI_API_BASE"] = "your-api-base"

tools = [
{
@@ -141,9 +240,7 @@ messages = [
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]

response = completion(
model="azure/mistral-large-latest",
api_base=os.getenv("AZURE_MISTRAL_API_BASE")
api_key=os.getenv("AZURE_MISTRAL_API_KEY")
model="azure_ai/mistral-large-latest",
messages=messages,
tools=tools,
tool_choice="auto",
@@ -206,10 +303,12 @@ curl http://0.0.0.0:4000/v1/chat/completions \

## Supported Models

LiteLLM supports **ALL** Azure AI models. Here are a few examples:

| Model Name | Function Call |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Cohere command-r-plus | `completion(model="azure/command-r-plus", messages)` |
| Cohere command-r | `completion(model="azure/command-r", messages)` |
| mistral-large-latest | `completion(model="azure/mistral-large-latest", messages)` |
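
Any other Azure AI Studio deployment can be called the same way by prefixing its model name with `azure_ai/`. A minimal sketch, assuming a hypothetical Llama-3 serverless deployment reachable via your `AZURE_AI_API_KEY`/`AZURE_AI_API_BASE` (the model name below is a placeholder):

```python
from litellm import completion
import os

# placeholder credentials - point these at your own Azure AI Studio deployment
os.environ["AZURE_AI_API_KEY"] = "azure ai key"
os.environ["AZURE_AI_API_BASE"] = "azure ai base url"

response = completion(
    model="azure_ai/Meta-Llama-3-70B-Instruct",  # hypothetical deployment/model name
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)
```
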


