
Pezzo is a fully cloud-native and open-source LLMOps platform. Seamlessly observe and monitor your AI operations, troubleshoot issues, save up to 90% on costs and latency, collaborate and manage your prompts in one place, and instantly deliver AI changes.

     


✨ Features

Documentation

Click here to navigate to the Official Pezzo Documentation

In the documentation, you can find information on how to use Pezzo and its architecture, as well as tutorials and recipes for various use cases and LLM providers.

Supported Clients

The following clients are available: Node.js (Docs), Python (Docs), and LangChain.

Feature coverage across the clients includes Prompt Management, Observability, and Caching.

Looking for a client that's not listed here? Open an issue and let us know!

Getting Started - Docker Compose

If you simply want to run the full Pezzo stack locally, check out Running With Docker Compose in the documentation.
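
As a minimal sketch (assuming the repository root contains a full-stack docker-compose.yaml, as described in that guide), the Docker Compose path boils down to:

docker-compose up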

If you want to run Pezzo in development mode, continue reading.

Prerequisites

Install dependencies

Install nvm to manage different versions of node

brew install nvm

## Add the following lines to your ~/.zshrc or ~/.bashrc

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
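
After updating your shell profile, reload it and confirm nvm is available (a quick sanity check; the exact profile file depends on your shell):

source ~/.zshrc   # or ~/.bashrc
nvm --version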

Switch to the project's Node.js version and install NPM dependencies by running:

nvm use

## Confirm node version
node -v
>> v18.16.0


npm install

Spin up infrastructure dependencies via Docker Compose

Pezzo is entirely cloud-native and relies solely on open-source technologies such as PostgreSQL, ClickHouse, Redis and Supertokens.

You can run these dependencies via Docker Compose:

docker-compose -f docker-compose.infra.yaml up
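
To confirm the infrastructure containers are running (a quick check; the service names come from the compose file), you can run:

docker-compose -f docker-compose.infra.yaml ps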

GAI API dependency

You can set up the GAI platform locally and run it. Follow this guide for setup.

Alternatively, you can use port forwarding to access the dev service:


## Yes dev service is in prd env :)
$(spaas aws configure --system sn-feature --env prd)
kubectl -n sn-feature port-forward service/gai-api-backend-dev 8081:80
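
Once the port-forward is running, any HTTP response on the forwarded port confirms the tunnel works (a generic connectivity check, not specific to the GAI API):

curl -I http://localhost:8081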

Start Pezzo

Deploy Prisma migrations:

npx dotenv-cli -e apps/server/.env -- npx prisma migrate deploy --schema apps/server/prisma/schema.prisma
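
Optionally, you can verify that all migrations have been applied; Prisma reports this via migrate status (using the same environment file as above):

npx dotenv-cli -e apps/server/.env -- npx prisma migrate status --schema apps/server/prisma/schema.prisma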

Run the server:

npx nx serve server

The server is now running. You can verify that by navigating to http://localhost:3000/api/healthz.
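
You can also check the health endpoint from the command line:

curl http://localhost:3000/api/healthz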

In development mode, you should run codegen in watch mode so that types are regenerated automatically whenever you change the schema. After starting the server, run the following in a separate terminal window:

npm run graphql:codegen:watch

This will connect codegen directly to the server and keep your GraphQL schema up-to-date as you make changes.

Finally, you are ready to run the Pezzo Console:

npx nx serve console

Log in via http://localhost:4200/admin/login and sign up to create a new admin user. If you have trouble accessing the UI, go to the same URL and choose Sign In instead.

To debug the server app in VS Code, confirm that runtimeExecutable in your debug configuration points to the correct Node.js binary.
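
If you manage Node.js with nvm, you can print the binary path to use for runtimeExecutable (a small helper, assuming the Node 18 version installed earlier):

nvm which 18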

That's it! The Pezzo Console is now accessible at http://localhost:4200 🚀

Contributing

We welcome contributions from the community! Please feel free to submit pull requests or create issues for bugs or feature suggestions.

If you want to contribute but are not sure how, join our Discord and we'll be happy to help you out!

Please check out CONTRIBUTING.md before contributing.

License

This repository's source code is available under the Apache 2.0 License.