Commit

sync repo
yanxi0830 committed Sep 17, 2024
1 parent 8acf430 commit f567252
Showing 122 changed files with 4,675 additions and 7,737 deletions.
8 changes: 4 additions & 4 deletions CONTRIBUTING.md
@@ -31,13 +31,13 @@ $ pip install -r requirements-dev.lock

## Modifying/Adding code

-Most of the SDK is generated code, and any modified code will be overridden on the next generation. The
-`src/llama_stack/lib/` and `examples/` directories are exceptions and will never be overridden.
+Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
+result in merge conflicts between manual patches and changes from the generator. The generator will never
+modify the contents of the `src/llama_stack/lib/` and `examples/` directories.

## Adding and running examples

-All files in the `examples/` directory are not modified by the Stainless generator and can be freely edited or
-added to.
+All files in the `examples/` directory are not modified by the generator and can be freely edited or added to.

```bash
# add an example to examples/<your-example>.py
67 changes: 27 additions & 40 deletions README.md
@@ -15,7 +15,8 @@ The REST API documentation can be found on [docs.llama-stack.todo](https://docs.llama-stack.todo).
## Installation

```sh
-pip install llama-stack
+# install from this staging repo
+pip install llama-stack-client
```

## Usage
@@ -30,13 +31,11 @@ client = LlamaStack(
environment="sandbox",
)

-agentic_system_create_response = client.agentic_system.create(
-    agent_config={
-        "instructions": "instructions",
-        "model": "model",
-    },
+session = client.agentic_system.sessions.create(
+    agent_id="agent_id",
+    session_name="session_name",
)
-print(agentic_system_create_response.agent_id)
+print(session.session_id)
```

## Async usage
@@ -54,13 +53,11 @@ client = AsyncLlamaStack(


async def main() -> None:
-    agentic_system_create_response = await client.agentic_system.create(
-        agent_config={
-            "instructions": "instructions",
-            "model": "model",
-        },
+    session = await client.agentic_system.sessions.create(
+        agent_id="agent_id",
+        session_name="session_name",
    )
-    print(agentic_system_create_response.agent_id)
+    print(session.session_id)


asyncio.run(main())
@@ -93,11 +90,9 @@ from llama_stack import LlamaStack
client = LlamaStack()

try:
-    client.agentic_system.create(
-        agent_config={
-            "instructions": "instructions",
-            "model": "model",
-        },
+    client.agentic_system.sessions.create(
+        agent_id="agent_id",
+        session_name="session_name",
    )
except llama_stack.APIConnectionError as e:
print("The server could not be reached")
@@ -141,11 +136,9 @@ client = LlamaStack(
)

# Or, configure per-request:
-client.with_options(max_retries=5).agentic_system.create(
-    agent_config={
-        "instructions": "instructions",
-        "model": "model",
-    },
+client.with_options(max_retries=5).agentic_system.sessions.create(
+    agent_id="agent_id",
+    session_name="session_name",
)
```

@@ -169,11 +162,9 @@ client = LlamaStack(
)

# Override per-request:
-client.with_options(timeout=5.0).agentic_system.create(
-    agent_config={
-        "instructions": "instructions",
-        "model": "model",
-    },
+client.with_options(timeout=5.0).agentic_system.sessions.create(
+    agent_id="agent_id",
+    session_name="session_name",
)
```

@@ -213,16 +204,14 @@ The "raw" Response object can be accessed by prefixing `.with_raw_response.` to
from llama_stack import LlamaStack

client = LlamaStack()
-response = client.agentic_system.with_raw_response.create(
-    agent_config={
-        "instructions": "instructions",
-        "model": "model",
-    },
+response = client.agentic_system.sessions.with_raw_response.create(
+    agent_id="agent_id",
+    session_name="session_name",
)
print(response.headers.get('X-My-Header'))

-agentic_system = response.parse()  # get the object that `agentic_system.create()` would have returned
-print(agentic_system.agent_id)
+session = response.parse()  # get the object that `agentic_system.sessions.create()` would have returned
+print(session.session_id)
```

These methods return an [`APIResponse`](https://github.com/stainless-sdks/llama-stack-python/tree/main/src/llama_stack/_response.py) object.
@@ -236,11 +225,9 @@ The above interface eagerly reads the full response body when you make the request
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.

```python
-with client.agentic_system.with_streaming_response.create(
-    agent_config={
-        "instructions": "instructions",
-        "model": "model",
-    },
+with client.agentic_system.sessions.with_streaming_response.create(
+    agent_id="agent_id",
+    session_name="session_name",
) as response:
print(response.headers.get("X-My-Header"))

93 changes: 26 additions & 67 deletions api.md
@@ -2,11 +2,9 @@

```python
from llama_stack.types import (
-    Artifact,
    Attachment,
    BatchCompletion,
    CompletionMessage,
-    Run,
    SamplingParams,
    SystemMessage,
    ToolCall,
@@ -15,13 +13,29 @@ from llama_stack.types import (
)
```

+# Telemetry
+
+Types:
+
+```python
+from llama_stack.types import TelemetryGetTraceResponse
+```
+
+Methods:
+
+- <code title="get /telemetry/get_trace">client.telemetry.<a href="./src/llama_stack/resources/telemetry.py">get_trace</a>(\*\*<a href="src/llama_stack/types/telemetry_get_trace_params.py">params</a>) -> <a href="./src/llama_stack/types/telemetry_get_trace_response.py">TelemetryGetTraceResponse</a></code>
+- <code title="post /telemetry/log_event">client.telemetry.<a href="./src/llama_stack/resources/telemetry.py">log</a>(\*\*<a href="src/llama_stack/types/telemetry_log_params.py">params</a>) -> None</code>

# AgenticSystem

Types:

```python
from llama_stack.types import (
CustomQueryGeneratorConfig,
DefaultQueryGeneratorConfig,
InferenceStep,
LlmQueryGeneratorConfig,
MemoryRetrievalStep,
RestAPIExecutionConfig,
ShieldCallStep,
@@ -68,20 +82,14 @@ Methods:
Types:

```python
-from llama_stack.types.agentic_system import AgenticSystemTurnStreamChunk, Turn
+from llama_stack.types.agentic_system import AgenticSystemTurnStreamChunk, Turn, TurnStreamEvent
```

Methods:

- <code title="post /agentic_system/turn/create">client.agentic_system.turns.<a href="./src/llama_stack/resources/agentic_system/turns.py">create</a>(\*\*<a href="src/llama_stack/types/agentic_system/turn_create_params.py">params</a>) -> <a href="./src/llama_stack/types/agentic_system/agentic_system_turn_stream_chunk.py">AgenticSystemTurnStreamChunk</a></code>
- <code title="get /agentic_system/turn/get">client.agentic_system.turns.<a href="./src/llama_stack/resources/agentic_system/turns.py">retrieve</a>(\*\*<a href="src/llama_stack/types/agentic_system/turn_retrieve_params.py">params</a>) -> <a href="./src/llama_stack/types/agentic_system/turn.py">Turn</a></code>

-# Artifacts
-
-Methods:
-
-- <code title="get /artifacts/get">client.artifacts.<a href="./src/llama_stack/resources/artifacts.py">get</a>(\*\*<a href="src/llama_stack/types/artifact_get_params.py">params</a>) -> <a href="./src/llama_stack/types/shared/artifact.py">Artifact</a></code>

# Datasets

Types:
@@ -152,41 +160,24 @@ Methods:
- <code title="post /evaluate/summarization/">client.evaluations.<a href="./src/llama_stack/resources/evaluations.py">summarization</a>(\*\*<a href="src/llama_stack/types/evaluation_summarization_params.py">params</a>) -> <a href="./src/llama_stack/types/evaluation_job.py">EvaluationJob</a></code>
- <code title="post /evaluate/text_generation/">client.evaluations.<a href="./src/llama_stack/resources/evaluations.py">text_generation</a>(\*\*<a href="src/llama_stack/types/evaluation_text_generation_params.py">params</a>) -> <a href="./src/llama_stack/types/evaluation_job.py">EvaluationJob</a></code>

-# Experiments
-
-Types:
-
-```python
-from llama_stack.types import Experiment
-```
-
-Methods:
-
-- <code title="post /experiments/create">client.experiments.<a href="./src/llama_stack/resources/experiments/experiments.py">create</a>(\*\*<a href="src/llama_stack/types/experiment_create_params.py">params</a>) -> <a href="./src/llama_stack/types/experiment.py">Experiment</a></code>
-- <code title="get /experiments/get">client.experiments.<a href="./src/llama_stack/resources/experiments/experiments.py">retrieve</a>(\*\*<a href="src/llama_stack/types/experiment_retrieve_params.py">params</a>) -> <a href="./src/llama_stack/types/experiment.py">Experiment</a></code>
-- <code title="post /experiments/update">client.experiments.<a href="./src/llama_stack/resources/experiments/experiments.py">update</a>(\*\*<a href="src/llama_stack/types/experiment_update_params.py">params</a>) -> <a href="./src/llama_stack/types/experiment.py">Experiment</a></code>
-- <code title="get /experiments/list">client.experiments.<a href="./src/llama_stack/resources/experiments/experiments.py">list</a>() -> <a href="./src/llama_stack/types/experiment.py">Experiment</a></code>
-- <code title="post /experiments/create_run">client.experiments.<a href="./src/llama_stack/resources/experiments/experiments.py">create_run</a>(\*\*<a href="src/llama_stack/types/experiment_create_run_params.py">params</a>) -> <a href="./src/llama_stack/types/shared/run.py">Run</a></code>
-
-## Artifacts
-
-Methods:
-
-- <code title="post /experiments/artifacts/get">client.experiments.artifacts.<a href="./src/llama_stack/resources/experiments/artifacts.py">retrieve</a>(\*\*<a href="src/llama_stack/types/experiments/artifact_retrieve_params.py">params</a>) -> <a href="./src/llama_stack/types/shared/artifact.py">Artifact</a></code>
-- <code title="post /experiments/artifacts/upload">client.experiments.artifacts.<a href="./src/llama_stack/resources/experiments/artifacts.py">upload</a>(\*\*<a href="src/llama_stack/types/experiments/artifact_upload_params.py">params</a>) -> <a href="./src/llama_stack/types/shared/artifact.py">Artifact</a></code>

# Inference

Types:

```python
-from llama_stack.types import ChatCompletionStreamChunk, CompletionStreamChunk
+from llama_stack.types import (
+    ChatCompletionStreamChunk,
+    CompletionStreamChunk,
+    TokenLogProbs,
+    InferenceChatCompletionResponse,
+    InferenceCompletionResponse,
+)
```

Methods:

- <code title="post /inference/chat_completion">client.inference.<a href="./src/llama_stack/resources/inference/inference.py">chat_completion</a>(\*\*<a href="src/llama_stack/types/inference_chat_completion_params.py">params</a>) -> <a href="./src/llama_stack/types/chat_completion_stream_chunk.py">ChatCompletionStreamChunk</a></code>
- <code title="post /inference/completion">client.inference.<a href="./src/llama_stack/resources/inference/inference.py">completion</a>(\*\*<a href="src/llama_stack/types/inference_completion_params.py">params</a>) -> <a href="./src/llama_stack/types/completion_stream_chunk.py">CompletionStreamChunk</a></code>
- <code title="post /inference/chat_completion">client.inference.<a href="./src/llama_stack/resources/inference/inference.py">chat_completion</a>(\*\*<a href="src/llama_stack/types/inference_chat_completion_params.py">params</a>) -> <a href="./src/llama_stack/types/inference_chat_completion_response.py">InferenceChatCompletionResponse</a></code>
- <code title="post /inference/completion">client.inference.<a href="./src/llama_stack/resources/inference/inference.py">completion</a>(\*\*<a href="src/llama_stack/types/inference_completion_params.py">params</a>) -> <a href="./src/llama_stack/types/inference_completion_response.py">InferenceCompletionResponse</a></code>

## Embeddings

@@ -200,19 +191,6 @@ Methods:

- <code title="post /inference/embeddings">client.inference.embeddings.<a href="./src/llama_stack/resources/inference/embeddings.py">create</a>(\*\*<a href="src/llama_stack/types/inference/embedding_create_params.py">params</a>) -> <a href="./src/llama_stack/types/inference/embeddings.py">Embeddings</a></code>

-# Logging
-
-Types:
-
-```python
-from llama_stack.types import LoggingGetLogsResponse
-```
-
-Methods:
-
-- <code title="post /logging/get_logs">client.logging.<a href="./src/llama_stack/resources/logging.py">get_logs</a>(\*\*<a href="src/llama_stack/types/logging_get_logs_params.py">params</a>) -> <a href="./src/llama_stack/types/logging_get_logs_response.py">LoggingGetLogsResponse</a></code>
-- <code title="post /logging/log_messages">client.logging.<a href="./src/llama_stack/resources/logging.py">log_messages</a>(\*\*<a href="src/llama_stack/types/logging_log_messages_params.py">params</a>) -> None</code>

# Safety

Types:
@@ -307,25 +285,6 @@ Methods:

- <code title="post /reward_scoring/score">client.reward_scoring.<a href="./src/llama_stack/resources/reward_scoring.py">score</a>(\*\*<a href="src/llama_stack/types/reward_scoring_score_params.py">params</a>) -> <a href="./src/llama_stack/types/reward_scoring.py">RewardScoring</a></code>

-# Runs
-
-Methods:
-
-- <code title="post /runs/update">client.runs.<a href="./src/llama_stack/resources/runs/runs.py">update</a>(\*\*<a href="src/llama_stack/types/run_update_params.py">params</a>) -> <a href="./src/llama_stack/types/shared/run.py">Run</a></code>
-- <code title="post /runs/log_metrics">client.runs.<a href="./src/llama_stack/resources/runs/runs.py">log_metrics</a>(\*\*<a href="src/llama_stack/types/run_log_metrics_params.py">params</a>) -> None</code>
-
-## Metrics
-
-Types:
-
-```python
-from llama_stack.types.runs import MetricListResponse
-```
-
-Methods:
-
-- <code title="get /runs/metrics">client.runs.metrics.<a href="./src/llama_stack/resources/runs/metrics.py">list</a>(\*\*<a href="src/llama_stack/types/runs/metric_list_params.py">params</a>) -> <a href="./src/llama_stack/types/runs/metric_list_response.py">MetricListResponse</a></code>

# SyntheticDataGeneration

Types:
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "llama_stack"
version = "0.0.1-alpha.4"
name = "llama_stack_client"
version = "0.0.1-alpha.0"
description = "The official Python library for the llama-stack API"
dynamic = ["readme"]
license = "Apache-2.0"
6 changes: 3 additions & 3 deletions requirements-dev.lock
@@ -49,7 +49,7 @@ markdown-it-py==3.0.0
# via rich
mdurl==0.1.2
# via markdown-it-py
-mypy==1.10.1
+mypy==1.11.2
mypy-extensions==1.0.0
# via mypy
nodeenv==1.8.0
@@ -70,7 +70,7 @@ pydantic-core==2.18.2
# via pydantic
pygments==2.18.0
# via rich
-pyright==1.1.374
+pyright==1.1.380
pytest==7.1.1
# via pytest-asyncio
pytest-asyncio==0.21.1
@@ -80,7 +80,7 @@ pytz==2023.3.post1
# via dirty-equals
respx==0.20.2
rich==13.7.1
-ruff==0.5.6
+ruff==0.6.5
setuptools==68.2.2
# via nodeenv
six==1.16.0