Removed Steps and Tabs from Docs (#30)
mgregerson authored Apr 11, 2024
1 parent 110fa1d commit 4dc391e
Showing 40 changed files with 611 additions and 676 deletions.
4 changes: 3 additions & 1 deletion fern/docs.yml
@@ -61,7 +61,9 @@ navigation:
path: ./pages/openllmetry/tracing/entities-traces.mdx
- page: Tracking User Feedback
path: ./pages/openllmetry/tracing/tracking-feedback.mdx
- page: Manual Implementations (Typescript / Javascript)
- page: Manually reporting calls to LLMs and Vector DBs
path: ./pages/openllmetry/tracing/manually-reporting-calls.mdx
- page: Issues with Auto-instrumentation (Typescript / Javascript)
path: ./pages/openllmetry/tracing/manual-implementations.mdx
- page: Usage with Threads (Python)
path: ./pages/openllmetry/tracing/usage-threads.mdx
2 changes: 1 addition & 1 deletion fern/fern.config.json
@@ -1,4 +1,4 @@
{
"organization": "traceloop",
"version": "0.19.30"
"version": "0.20.0"
}
@@ -12,8 +12,8 @@ To disable polling altogether, set the `TRACELOOP_SYNC_ENABLED` environment variable to false.

Make sure you’ve configured the SDK with the right environment and API Key. See the [SDK documentation](/docs/openllmetry/integrations/traceloop) for more information.
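As a minimal sketch of that configuration (the app name below is a placeholder, and `Traceloop.init` is assumed to be the SDK's standard entry point):

```python
import os

from traceloop.sdk import Traceloop

# Placeholder key; in practice you would export TRACELOOP_API_KEY in your shell.
os.environ.setdefault("TRACELOOP_API_KEY", "<your-api-key>")
# Setting TRACELOOP_SYNC_ENABLED to "false" here would disable prompt polling.

# Initialize the SDK once at startup; prompts are then fetched and cached.
Traceloop.init(app_name="prompt_fetching_example")
```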

<Callout intent="info">
The SDK uses smart caching mechanisms to proide zero latency for fetching prompts.
<Callout intent="tip">
The SDK uses smart caching mechanisms to provide zero latency for fetching prompts.
</Callout>

## Get Prompt API
@@ -67,6 +67,6 @@ Then, you can retrieve it in your code using `get_prompt`:
</CodeBlock>
</CodeBlocks>

<Callout intent="info">
<Callout intent="tip">
The returned variable `prompt_args` is compatible with the API used by the foundation models SDKs (OpenAI, Anthropic, etc.) which means you should directly plug in the response to the appropriate API call.
</Callout>
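As a rough illustration of the flow described above (the prompt key and variable below are hypothetical, and the import path assumes the Python SDK's prompt helpers):

```python
from traceloop.sdk.prompts import get_prompt

# "joke_generator" and the "persona" variable are made-up examples.
prompt_args = get_prompt(key="joke_generator", variables={"persona": "pirate"})

# prompt_args is shaped like the arguments of the provider SDKs (model,
# messages, temperature, ...), so it can be unpacked into the API call.
print(prompt_args)
```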
@@ -15,8 +15,8 @@ The prompt configuration is composed of two parts:
- The prompt template (system and/or user prompts)
- The model configuration (`temperature`, `top_p`, etc.)

<Callout intent="info">
Your prompt template can include variables. Variables are defined according to the syntax of the parser specified. For example, if using `jinja2`, the syntax will be `{{ variable_name }}`. You can then pass variable values to the SDK when calling `get_prompt`. See the example in the [SDK Usage](/fetching-prompts) section.
<Callout intent="tip">
Your prompt template can include variables. Variables are defined according to the syntax of the parser specified. For example, if using `jinja2`, the syntax will be `{{ variable_name }}`. You can then pass variable values to the SDK when calling `get_prompt`. See the example in the [SDK Usage](/docs/documentation/prompt-management/fetching-prompts) section.
</Callout>
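Purely to illustrate that `{{ variable_name }}` syntax (the template text below is made up, and the local `jinja2` rendering is only a demonstration; Traceloop renders the template for you when you fetch the prompt):

```python
from jinja2 import Template

# Hypothetical template text written with jinja2-style variables.
template = Template("Tell me a {{ style }} joke about {{ topic }}.")

# Rendering locally just to show how the variables are substituted.
print(template.render(style="dad", topic="OpenTelemetry"))
# -> Tell me a dad joke about OpenTelemetry.
```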

Initially, prompts are created in `Draft Mode`. In this mode, you can make changes to the prompt and configuration. You can also test your prompt in the playground (see below).
@@ -49,7 +49,7 @@ Choose the `Deploy` Tab to navigate to the deployments page for your prompt.

Here, you can see all recent prompt versions, and which environments they are deployed to. Simply click on the `Deploy` button to deploy a prompt version to an environment. Similarly, click `Rollback` to revert to a previous prompt version for a specific environment.

<Callout intent="info">
<Callout intent="note">
As a safeguard, you cannot deploy a prompt to the `Staging` environment before first deploying it to `Development`. Similarly, you cannot deploy to `Production` without first deploying to `Staging`.
</Callout>

Expand All @@ -59,6 +59,6 @@ To fetch prompts from a specific environment, you must supply that environment

If you want to make changes to your prompt after deployment, simply create a new version by clicking on the `New Version` button. New versions will be created in `Draft Mode`.

<Callout intent="warn">
<Callout intent="warning">
If you change the names of variables or add/remove existing variables, you will be required to create a new prompt.
</Callout>
16 changes: 7 additions & 9 deletions fern/pages/documentation/prompt-management/quickstart.mdx
@@ -4,23 +4,22 @@

You can use Traceloop to manage your prompts and model configurations. That way you can easily experiment with different prompts, and roll out changes gradually and safely.

<Callout intent="info">
<Callout intent="note">
Make sure you’ve created an API key and set it as an environment variable `TRACELOOP_API_KEY` before you start. Check out the SDK’s [getting started guide](/docs/openllmetry/quick-start/python) for more information.
</Callout>

<Steps>

### Create a new prompt

Click **New Prompt** to create a new prompt. Give it a name, which will be used to retrieve it in your code later.

### Step 2: Define it in the Prompt Registry
### Define it in the Prompt Registry

Set the system and/or user prompt. You can use variables in your prompt by following the [Jinja format](https://jinja.palletsprojects.com/en/3.1.x/templates/) of `{{ variable_name }}`. The values of these variables will be passed in when you retrieve the prompt in your code.

For more information see the [Registry Documentation](/prompt-registry).
For more information see the [Registry Documentation](/docs/documentation/prompt-management/prompt-registry).

<Callout intent="info" icon="fa-light fa-lightbulb">
<Callout intent="tip">
This screen is also a prompt playground. Give the prompt a try by clicking **Test** at the bottom.
</Callout>

@@ -91,10 +90,9 @@ Retrieve your prompt by using the `get_prompt` function. For example, if you’v
</CodeBlock>
</CodeBlocks>

<Callout intent="info">
<Callout intent="note">
The returned variable `prompt_args` is compatible with the API used by the foundation models SDKs (OpenAI, Anthropic, etc.) which means you can directly plug in the response to the appropriate API call.
</Callout>
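As a hedged sketch of what directly plugging in the response looks like in practice (the prompt key, variables, and model are illustrative, and `OPENAI_API_KEY` is assumed to be set):

```python
from openai import OpenAI
from traceloop.sdk.prompts import get_prompt

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt key and variables.
prompt_args = get_prompt(key="joke_generator", variables={"persona": "pirate"})

# prompt_args already carries the model, messages, and model configuration,
# so it can be unpacked straight into the chat completions call.
completion = client.chat.completions.create(**prompt_args)
print(completion.choices[0].message.content)
```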

</Steps>

For more information see the [SDK Usage Documentation](/fetching-prompts).
For more information see the [SDK Usage Documentation](/docs/documentation/prompt-management/fetching-prompts).
</Steps>
78 changes: 39 additions & 39 deletions fern/pages/openllmetry/contribute/gen-ai.mdx
@@ -11,42 +11,42 @@ This is a work in progress, and we welcome your feedback and contributions!

## Definitions

<Tabs>
<Tab title="LLM Foundation Models">
| Field | Description |
|------------------------------|--------------------------------------------------------------------------------------------------|
| `llm.vendor` | The vendor of the LLM (e.g. OpenAI, Anthropic, etc.) |
| `llm.request.type` | The type of request (e.g. `completion`, `chat`, etc.) |
| `llm.request.model` | The model requested (e.g. `gpt-4`, `claude`, etc.) |
| `llm.request.functions` | An array of function definitions provided to the model in the request |
| `llm.prompts` | An array of prompts as sent to the LLM model |
| `llm.request.max_tokens` | The maximum number of response tokens requested |
| `llm.response.model` | The model actually used (e.g. `gpt-4-0613`, etc.) |
| `llm.usage.total_tokens` | The total number of tokens used |
| `llm.usage.completion_tokens` | The number of tokens used for the completion response |
| `llm.usage.prompt_tokens` | The number of tokens used for the prompt in the request |
| `llm.completions` | An array of completions returned from the LLM model |
| `llm.temperature` | |
| `llm.top_p` | |
| `llm.frequency_penalty` | |
| `llm.presence_penalty` | |
| `llm.chat.stop_sequences` | |
| `llm.user` | The user ID sent with the request |
| `llm.headers` | The headers used for the request |
</Tab>
<Tab title="Vector DBs">
| Field | Description |
|--------------------------|----------------------------------------------------------------|
| `vector_db.vendor` | The vendor of the Vector DB (e.g. Chroma, Pinecone, etc.) |
| `vector_db.query.top_k` | The top k used for the query |
</Tab>
<Tab title="LLM Frameworks">
| Field | Description |
|---------------------------------|---------------------------------------------------------------------------------------------------|
| `traceloop.span.kind` | One of `workflow`, `task`, `agent`, `tool`. |
| `traceloop.workflow.name` | The name of the parent workflow/chain associated with this span. |
| `traceloop.entity.name` | Framework-related name for the entity (for example, in Langchain, this will be the name of the specific class that defined the chain / subchain). |
| `traceloop.association.properties` | Context on the request (relevant User ID, Chat ID, etc.). |

</Tab>
</Tabs>
### LLM Foundation Models

| Field | Description |
|------------------------------|--------------------------------------------------------------------------------------------------|
| `llm.vendor` | The vendor of the LLM (e.g. OpenAI, Anthropic, etc.) |
| `llm.request.type` | The type of request (e.g. `completion`, `chat`, etc.) |
| `llm.request.model` | The model requested (e.g. `gpt-4`, `claude`, etc.) |
| `llm.request.functions` | An array of function definitions provided to the model in the request |
| `llm.prompts` | An array of prompts as sent to the LLM model |
| `llm.request.max_tokens` | The maximum number of response tokens requested |
| `llm.response.model` | The model actually used (e.g. `gpt-4-0613`, etc.) |
| `llm.usage.total_tokens` | The total number of tokens used |
| `llm.usage.completion_tokens` | The number of tokens used for the completion response |
| `llm.usage.prompt_tokens` | The number of tokens used for the prompt in the request |
| `llm.completions` | An array of completions returned from the LLM model |
| `llm.temperature` | |
| `llm.top_p` | |
| `llm.frequency_penalty` | |
| `llm.presence_penalty` | |
| `llm.chat.stop_sequences` | |
| `llm.user` | The user ID sent with the request |
| `llm.headers` | The headers used for the request |

### Vector DBs

| Field | Description |
|--------------------------|----------------------------------------------------------------|
| `vector_db.vendor` | The vendor of the Vector DB (e.g. Chroma, Pinecone, etc.) |
| `vector_db.query.top_k` | The top k used for the query |

### LLM Frameworks

| Field | Description |
|---------------------------------|---------------------------------------------------------------------------------------------------|
| `traceloop.span.kind` | One of `workflow`, `task`, `agent`, `tool`. |
| `traceloop.workflow.name` | The name of the parent workflow/chain associated with this span. |
| `traceloop.entity.name` | Framework-related name for the entity (for example, in Langchain, this will be the name of the specific class that defined the chain / subchain). |
| `traceloop.association.properties` | Context on the request (relevant User ID, Chat ID, etc.). |
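To make the attribute names above concrete, here is a minimal sketch that sets a few of them on a span with the plain OpenTelemetry API (the tracer name and values are illustrative; in OpenLLMetry the instrumentations set these attributes for you):

```python
from opentelemetry import trace

tracer = trace.get_tracer("semantic-conventions-example")

# Illustrative values only.
with tracer.start_as_current_span("openai.chat") as span:
    span.set_attribute("llm.vendor", "OpenAI")
    span.set_attribute("llm.request.type", "chat")
    span.set_attribute("llm.request.model", "gpt-4")
    span.set_attribute("llm.usage.total_tokens", 123)
    span.set_attribute("traceloop.span.kind", "task")
```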

65 changes: 32 additions & 33 deletions fern/pages/openllmetry/contribute/local-development.mdx
@@ -20,48 +20,47 @@ To run a specific command on a specific package, run:
nx run <package>:<command>
```

<Tabs>
<Tab title="Python">
We use `poetry` to manage packages, and each package is managed independently in its own directory under `/packages`. All instrumentations depend on `opentelemetry-semantic-conventions-ai`, and `traceloop-sdk` depends on all the instrumentations.
## Python

If adding a new instrumentation, make sure to use it in `traceloop-sdk`, and write proper tests.
We use `poetry` to manage packages, and each package is managed independently in its own directory under `/packages`. All instrumentations depend on `opentelemetry-semantic-conventions-ai`, and `traceloop-sdk` depends on all the instrumentations.

### Debugging
If adding a new instrumentation, make sure to use it in `traceloop-sdk`, and write proper tests.

Whether you're working on an instrumentation or on the SDK, we recommend testing your changes by using the SDK in the sample app (`/packages/sample-app`) or through the tests under the SDK.
### Debugging

### Running tests
Whether you're working on an instrumentation or on the SDK, we recommend testing your changes by using the SDK in the sample app (`/packages/sample-app`) or through the tests under the SDK.

We record HTTP requests and then replay them in tests to avoid making actual calls to the foundation model providers. We use [vcr.py](https://github.com/kevin1024/vcrpy) and [pollyjs](https://github.com/Netflix/pollyjs/) for this; check their documentation to understand how to use them and how to re-record requests.
### Running tests

You can run all tests by running:
We record HTTP requests and then replay them in tests to avoid making actual calls to the foundation model providers. We use [vcr.py](https://github.com/kevin1024/vcrpy) and [pollyjs](https://github.com/Netflix/pollyjs/) for this; check their documentation to understand how to use them and how to re-record requests.
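A minimal, hypothetical sketch of how such a recorded test might look with vcr.py (the cassette path, test name, and OpenAI call are placeholders, not the repository's actual fixtures):

```python
import vcr
from openai import OpenAI


# The first run records the HTTP exchange; later runs replay it from the cassette.
@vcr.use_cassette("tests/fixtures/chat_completion.yaml", filter_headers=["authorization"])
def test_chat_completion():
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke"}],
    )
    assert response.choices
```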

```bash
nx run-many -t test
```
You can run all tests by running:

```bash
nx run-many -t test
```

Or run a specific test by running:

Or run a specific test by running:
```bash
nx run <package>:test
```

For example, to run the tests for the openai instrumentation package, run:

<CodeBlocks>
<CodeBlock title="Python">
```bash
nx run <package>:test
nx run opentelemetry-instrumentation-openai:test
```
</CodeBlock>
<CodeBlock title="Typescript">
```bash
nx run @traceloop/instrumentation-openai:test
```
</CodeBlock>
</CodeBlocks>

## Typescript

For example, to run the tests for the openai instrumentation package, run:

<CodeBlocks>
<CodeBlock title="Python">
```bash
nx run opentelemetry-instrumentation-openai:test
```
</CodeBlock>
<CodeBlock title="Typescript">
```bash
nx run @traceloop/instrumentation-openai:test
```
</CodeBlock>
</CodeBlocks>
</Tab>
<Tab title="Typescript">
We use `npm` with workspaces to manage packages in the monorepo. Install by running `npm install` in the root of the project. Each package has its own test suite. You can use the sample app to run and test changes locally.
</Tab>
</Tabs>
We use `npm` with workspaces to manage packages in the monorepo. Install by running `npm install` in the root of the project. Each package has its own test suite. You can use the sample app to run and test changes locally.
4 changes: 3 additions & 1 deletion fern/pages/openllmetry/contribute/overview.mdx
@@ -1,4 +1,6 @@
<h1>We welcome any contributions to OpenLLMetry, big or small.</h1>
---
excerpt: We welcome any contributions to OpenLLMetry, big or small
---

## Community

2 changes: 1 addition & 1 deletion fern/pages/openllmetry/integrations/axiom.mdx
@@ -1,5 +1,5 @@
---
excerpt: LLM Observability with Axiom and OpenLLMetry
title: LLM Observability with Axiom and OpenLLMetry
---

<Frame>
13 changes: 5 additions & 8 deletions fern/pages/openllmetry/integrations/azure-insights.mdx
@@ -1,5 +1,5 @@
---
excerpt: Azure Application Insights
title: Azure Application Insights
---

Traceloop supports sending traces to Azure Application Insights via standard OpenTelemetry integrations.
Expand All @@ -10,17 +10,15 @@ Review how to setup [OpenTelemetry with Python in Azure Application Insights](ht
![integrations-azure](https://fern-image-hosting.s3.amazonaws.com/traceloop/integrations-azure.png)
</Frame>

<Steps>
1. Provision an Application Insights instance in the [Azure portal](https://portal.azure.com/).

### Provision an Application Insights instance in the [Azure portal](https://portal.azure.com/).
2. Get your Connection String from the instance - [details here](https://learn.microsoft.com/en-us/azure/azure-monitor/app/sdk-connection-string?tabs=python).

### Get your Connection String from the instance - [details here](https://learn.microsoft.com/en-us/azure/azure-monitor/app/sdk-connection-string?tabs=python).

### Install required packages
3. Install required packages

`pip install azure-monitor-opentelemetry-exporter traceloop-sdk openai`

### Example implementation
4. Example implementation

```python
import os
@@ -87,4 +85,3 @@ def joke_workflow():
if __name__ == "__main__":
joke_workflow()
```
</Steps>
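As a hedged sketch of how the connection string and exporter are typically wired together (the app name, environment variable name, and workflow body are assumptions, not the file's actual code):

```python
import os

from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

# Assumes the connection string from the Azure portal is exported as an env var.
exporter = AzureMonitorTraceExporter.from_connection_string(
    os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)

# Hand the exporter to the Traceloop SDK so spans flow to Application Insights.
Traceloop.init(app_name="azure_insights_example", exporter=exporter)


@workflow(name="joke_workflow")
def joke_workflow():
    ...  # call your LLM here; the resulting spans are exported to Application Insights
```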
2 changes: 1 addition & 1 deletion fern/pages/openllmetry/integrations/datadog.mdx
@@ -1,5 +1,5 @@
---
excerpt: LLM Observability with Datadog and OpenLLMetry
title: LLM Observability with Datadog and OpenLLMetry
---

With Datadog, there are two options: you can either export directly to a Datadog Agent in your cluster, or go through an OpenTelemetry Collector (which requires that you deploy one in your cluster).