Test new pages #15

Open · wants to merge 2 commits into base `main`
docs/Endpoints/ChatCompletionAPI.md (new file, 61 additions)
---
sidebar_position: 1
id: chat-completion-api
title: Chat Completion API
tags:
- OpenAI API
- Chat Models
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Chat Completion API

**Both the role and the content are required for every message in the prompt.**

<Tabs>
<TabItem value="py" label="Python" default>
```python showLineNumbers
model = "Llama-3.1-70B-Instruct"  # choose one of the available LLMs (not an embedding model)
stream = True

chat_response = client.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": "You are a helpful assistant named Llama-3."},
{"role": "user", "content": "What is Open Telekom Cloud?"},
],
temperature=0.1,
max_tokens=256,
stream=stream
)

if not stream:
print(chat_response.choices[0].message.content)
else:
for chunk in chat_response:
if chunk.choices:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="", flush=True)
```
</TabItem>

<TabItem value="curl" label="cURL">
```bash showLineNumbers
curl -X POST https://llm-server.llmhub.t-systems.net/v2/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "Llama-3.1-70B-Instruct",
"messages": [
{"role": "system", "content": "You are a helpful assistant named Llama-3."},
{"role": "user", "content": "What is Open Telekom Cloud?"}
],
"temperature": 0.1,
"max_tokens": 256,
"stream": true
}'
```
</TabItem>
</Tabs>
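The streaming loop above can be factored into a small helper that accumulates the delta fragments into the complete reply (a sketch; `chunks` stands in for the streamed response object):

```python showLineNumbers
def collect_stream(chunks):
    """Join non-empty streamed delta fragments into the full reply text."""
    parts = []
    for chunk in chunks:
        if chunk.choices and chunk.choices[0].delta.content is not None:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)
```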
docs/Endpoints/CompletionAPI.md (new file, 56 additions)
---
sidebar_position: 2
id: completion-api
title: Completion API
tags:
- OpenAI API
- Text Models
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Completion API

**With this API, the raw prompt text is sent directly to the LLM without applying a chat template.**
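Because no chat template is applied server-side, any few-shot structure has to be written into the prompt string by hand. A hypothetical sketch:

```python showLineNumbers
# Build a raw few-shot prompt by hand; the server applies no chat template to it.
examples = [("What is 2 + 2?", "4"), ("What is 3 * 3?", "9")]
prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
prompt += "\nQ: What is 5 - 1?\nA:"
```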

<Tabs>
<TabItem value="py" label="Python" default>
```python showLineNumbers
model = "Llama-3.1-70B-Instruct" # choose one of the available LLMs (not the embedding model)
stream = True

completion = client.completions.create(
model=model,
prompt="What is Python programming language?",
stream=stream,
temperature=0.2,
max_tokens=128
)

if not stream:
print(completion.choices[0].text)

else:
for chunk in completion:
if chunk.choices:
if chunk.choices[0].text is not None:
print(chunk.choices[0].text, end="", flush=True)
```
</TabItem>

<TabItem value="curl" label="cURL">
```bash showLineNumbers
curl -X POST https://llm-server.llmhub.t-systems.net/v2/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "Llama-3.1-70B-Instruct",
"prompt": "What is Python programming language?",
"temperature": 0.2,
"max_tokens": 128,
"stream": true
}'
```
</TabItem>
</Tabs>
docs/Endpoints/EmbeddingAPI.md (new file, 42 additions)
---
sidebar_position: 3
id: embedding-api
title: Embedding API
tags:
- OpenAI API
- Embedding Models
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Embedding API

<Tabs>
<TabItem value="py" label="Python" default>
```python showLineNumbers
model = "jina-embeddings-v2-base-de"

texts = ["I am Batman and I'm rich", "I am Spiderman", "I am Ironman and I'm a billionaire", "I am Flash", "I am the president of USA"]
embeddings = client.embeddings.create(
input=texts, model=model
)

print('Embedding dimension: ', len(embeddings.data[0].embedding))
print('Number of embedding vectors: ', len(embeddings.data))
print('Token usage: ', embeddings.usage)
```
</TabItem>

<TabItem value="curl" label="cURL">
```bash showLineNumbers
curl -X POST https://llm-server.llmhub.t-systems.net/v2/embeddings \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "jina-embeddings-v2-base-de",
"input": ["I am Batman and I'm rich", "I am Spiderman", "I am Ironman and I'm a billionaire", "I am Flash", "I am the president of USA"]
}'
```
</TabItem>
</Tabs>
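Embedding vectors are usually compared with cosine similarity; the snippet below is a standalone sketch, independent of the API response format:

```python showLineNumbers
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```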
docs/Endpoints/_category_.json (new file, 8 additions)
{
"label": "Endpoints",
"position": 5,
"link": {
"type": "generated-index",
"description": "List of available Endpoints in AI Foundation Sevices"
}
}