Revert "Webhooks release (#2869)" (#2871)
This reverts commit 15035fc.
chitalian authored Oct 23, 2024
1 parent 15035fc commit 3365b6b
Showing 34 changed files with 17,688 additions and 19,311 deletions.
69 changes: 0 additions & 69 deletions docs/features/advanced-usage/evaluation/overview.mdx

This file was deleted.

Empty file.
108 changes: 0 additions & 108 deletions docs/features/webhooks-testing.mdx

This file was deleted.

118 changes: 60 additions & 58 deletions docs/features/webhooks.mdx
@@ -1,74 +1,76 @@
 ---
-title: "Webhooks: Real-Time LLM Integration & Automation"
-sidebarTitle: "Quick start"
-description: "Leverage Helicone's powerful webhook system to automate your LLM workflows. Instantly react to events, trigger actions, and integrate with external tools for enhanced AI observability and management. Perfect for developers building robust LLM applications."
-twitter:title: "Helicone Webhooks: Real-Time LLM Integration & Automation"
+title: "Webhooks"
+description: "Webhooks allow you to build or set up integrations which subscribe to certain events on Helicone.ai. When one of those events is triggered, we'll send a HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server."
 ---

-## Top use cases
+## Beta warning

-- **Scoring**: [Score requests based on custom logic](/features/advanced-usage/scores).
-- **Data ETL**: Moving data from one system to another.
-- **Automations** / Alerts: Trigger actions automatically, such as sending a Slack notification or triggering a webhook to an external tool.
+<Warning>
+  Please note that Webhooks is currently in beta. If you wish to gain access,
+  kindly contact us via Discord or email us at [email protected]
+</Warning>

-## Setting up webhooks
+## NextJS Example

-Head over to the [webhooks page](https://us.helicone.ai/webhooks) to set up a webhook.
-
-<Frame caption="Webhooks page">
-  <img src="/images/webhooks/ngrok-example.png" alt="Ngrok example" />
-</Frame>
-
-Add the webhook URL and select the events you want to trigger on.
-
-You will want to copy the HMAC key and add it to your webhook environment to validate the signature of the webhook request.
-
-## Configure your webhook route
-
-<Note>
-  We recommend for startups to use [Cloudflare
-  workers](https://developers.cloudflare.com/workers/) or [Vercel edge
-  functions](https://vercel.com/docs/functions/edge-functions) for webhooks,
-  they are simple to setup and scale very well.
-
-  We have a prebuilt [Cloudflare worker](https://deploy.workers.cloudflare.com/?url=https://github.com/Helicone/helicone/tree/main/examples/worker-helicone-scores) that you can use as a starting point.
-
-</Note>
-
-The webhook endpoint is a POST route that accepts the following JSON body:
-
-### POST /webhook
-
-The body of the request will contain the following fields:
+```bash
+npm install graphql-tag @apollo/client
+# Alternatively, you can use yarn add graphql-tag @apollo/client
+curl -sSL https://raw.githubusercontent.com/Helicone/helicone/main/web/lib/api/graphql/schema/types/graphql.tsx > heliconeTypes.tsx
+```

-- `request_id`: The request ID of the request that triggered the webhook.
-- `request_body`: The body of the request that triggered the webhook.
-- `response_body`: The body of the response that triggered the webhook.
+```tsx
+import type { NextApiRequest, NextApiResponse } from "next";
+
-### Example - NextJS
+import gql from "graphql-tag";
+import { ApolloClient, InMemoryCache } from "@apollo/client";
+import { HeliconeRequest } from "./heliconeTypes";

-```tsx
-import crypto from "crypto";
+const client = new ApolloClient({
+  uri: "https://www.helicone.ai/api/graphql",
+  cache: new InMemoryCache(),
+  headers: {
+    Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
+  },
+});

 export default async function handler(
   req: NextApiRequest,
-  res: NextApiResponse
+  res: NextApiResponse<null>
 ) {
-  const { request_id, request_body, response_body } = req.body;
-
-  // STEP 1: Validate the signature of the webhook request
-  const hmac = crypto.createHmac("sha256", process.env.HELICONE_WEBHOOK_SECRET);
-  hmac.update(JSON.stringify({ request_id, request_body, response_body }));
-  const signature = hmac.digest("hex");
-  if (signature !== req.headers["helicone-signature"]) {
-    return res.status(401).json({ error: "Unauthorized" });
-  }
-
-  // STEP 2: Do something with the webhook data
-  console.log(request_id, request_body, response_body);
-  // ...
-  // EX: You can submit a score using our [Scoring API](/features/scoring)
-
-  return res.status(200).json({ message: "Webhook received" });
+  const { request_id: requestId } = req.body as {
+    request_id: string;
+  };
+  const GET_USERS = gql`
+    query GetHeliconeRequest($id: String!) {
+      heliconeRequest(filters: [{ requestId: { equals: $id } }]) {
+        id
+        responseBody
+        values {
+          name
+          value
+        }
+        response
+        requestBody
+        prompt
+        model
+        properties {
+          name
+          value
+        }
+        latency
+        createdAt
+        costUSD
+        cacheHits
+      }
+    }
+  `;
+
+  const heliconeRequest = await client.query<HeliconeRequest>({
+    query: GET_USERS,
+    variables: { id: requestId },
+  });
+  console.log(heliconeRequest.data);
+  res.status(200).json(null);
 }
 ```
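For reference, the signature check in the removed handler can be exercised locally by generating the same HMAC a sender would attach. A minimal sketch, assuming a hypothetical secret and payload (real values come from the Helicone webhooks page and an actual request):

```typescript
import { createHmac } from "crypto";

// Hypothetical values for illustration only.
const secret = "my-webhook-secret";
const payload = {
  request_id: "req_123",
  request_body: "{}",
  response_body: "{}",
};

// Mirror the removed handler's STEP 1: HMAC-SHA256 over the
// JSON-serialized fields, hex-encoded. The handler compares this
// against the `helicone-signature` request header.
const signature = createHmac("sha256", secret)
  .update(JSON.stringify(payload))
  .digest("hex");

console.log(signature); // 64 lowercase hex characters
```

Sending this value in a `helicone-signature` header on a test POST should pass the handler's validation, assuming both sides share the same secret.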
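The restored Apollo example can also be reproduced without the Apollo client, which is handy for quick checks. A sketch using plain `fetch`, assuming the same endpoint and auth header shown in the diff; the request id and the trimmed field selection are placeholders:

```typescript
// Build the same kind of GraphQL request the restored example sends,
// without @apollo/client. The query mirrors a subset of the fields
// selected in the diff above.
const GET_REQUEST = `
  query GetHeliconeRequest($id: String!) {
    heliconeRequest(filters: [{ requestId: { equals: $id } }]) {
      id
      responseBody
      model
      latency
      costUSD
    }
  }
`;

const body = JSON.stringify({
  query: GET_REQUEST,
  variables: { id: "req_123" }, // hypothetical request id
});

// Defined but not invoked here: executing it requires a valid
// HELICONE_API_KEY in the environment.
async function fetchHeliconeRequest() {
  const res = await fetch("https://www.helicone.ai/api/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
    },
    body,
  });
  return res.json();
}
```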
64 changes: 0 additions & 64 deletions docs/getting-started/integration-method/litellm-openllmetry.mdx

This file was deleted.

4 changes: 3 additions & 1 deletion docs/getting-started/integration-method/litellm.mdx
@@ -1,8 +1,10 @@
 ---
 title: "LiteLLM Integration"
-sidebarTitle: "Callbacks"
+sidebarTitle: "LiteLLM"
 description: "Connect Helicone with LiteLLM, a unified interface for multiple LLM providers. Standardize logging and monitoring across various AI models with simple callback or proxy setup."
 "twitter:title": "LiteLLM Integration - Helicone OSS LLM Observability"
+icon: "cloud"
+iconType: "solid"
 ---

 [LiteLLM](https://github.com/BerriAI/litellm) is a model I/O library to standardize API calls to Azure, Anthropic, OpenAI, etc. Here's how you can log your LLM API calls to Helicone from LiteLLM.