Please see the latest BentoML documentation for the OCI container-based deployment workflow: https://docs.bentoml.com/
bentoctl helps deploy machine learning models as production-ready API endpoints on the cloud, supporting AWS SageMaker, AWS Lambda, EC2, Google Compute Engine, Azure, Heroku, and more.
Join our Slack community today!
Looking to deploy your ML service quickly? Check out BentoML Cloud for the easiest and fastest way to deploy your bento. It's a full-featured, serverless environment with a model repository and built-in monitoring and logging.
- Framework-agnostic model deployment for TensorFlow, PyTorch, XGBoost, scikit-learn, ONNX, and many more via BentoML: the unified model serving framework.
- Simplify the deployment lifecycle: deploy, update, delete, and roll back.
- Take full advantage of BentoML's performance optimizations and cloud platform features out-of-the-box.
- Tailor bentoctl to your DevOps needs by customizing deployment operators and Terraform templates.
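As a sketch of how these pieces fit together, a bentoctl deployment is driven by a YAML deployment config that names the target operator and its options. The field values below (deployment name, region, memory size, timeout) are illustrative assumptions for an AWS Lambda deployment, not defaults; consult the operator's documentation for the exact schema:

```yaml
# deployment_config.yaml — illustrative sketch, not a canonical example.
# All spec values below are assumptions chosen for this example.
api_version: v1
name: quickstart          # name of this deployment
operator:
  name: aws-lambda        # operator installed via `bentoctl operator install`
template: terraform       # generate Terraform files for provisioning
spec:
  region: us-west-1
  timeout: 10             # Lambda timeout in seconds
  memory_size: 512        # Lambda memory in MB
```

With a config like this, the workflow is roughly: install the operator, generate the config and Terraform files with `bentoctl init`, build and push the deployable image with `bentoctl build`, then provision with Terraform; see the Quickstart Guide for the exact commands for each operator.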
- Quickstart Guide - Deploy your first model to AWS Lambda as a serverless API endpoint.
- Core Concepts - Learn the core concepts in bentoctl.
- Operators List - List of official operators and advanced configuration options.
- Join Community Slack - Get help from our community and maintainers.
- AWS Lambda
- AWS SageMaker
- AWS EC2
- Google Cloud Run
- Google Compute Engine
- Azure Container Instances
- Heroku
- To report a bug or suggest a feature request, use GitHub Issues.
- For other discussions, use GitHub Discussions under the BentoML repo.
- To receive release announcements and get support, join us on Slack.
There are many ways to contribute to the project:
- Create and share new operators. Use the deployment operator template to get started.
- If you have any feedback on the project, share it with the community in GitHub Discussions under the BentoML repo.
- Report issues you're facing and "Thumbs up" on issues and feature requests that are relevant to you.
- Investigate bugs and review other developers' pull requests.