A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
This repository allows you to get started with GUI-based training of a State-of-the-art Deep Learning model with little to no configuration needed! No-code training with TensorFlow has never been so easy.
The simplest way to serve AI/ML models in production
The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.
A beautiful Flask web API for YOLOv7 (and custom) models
Train and predict with pre-trained deep learning models through the GUI (web app). No more endless parameters, no more data preprocessing.
CLI & Python API to easily summarize text-based files with transformers
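A minimal sketch of what such a summarization flow typically looks like with the Hugging Face transformers pipeline; the model name and file path below are illustrative placeholders, not taken from the repository:

```python
# Minimal sketch: summarize a text file with the transformers summarization pipeline.
# The model name and file path are illustrative placeholders, not from the repository above.
from pathlib import Path
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = Path("report.txt").read_text(encoding="utf-8")
# Long documents usually need chunking; here we simply truncate for brevity.
result = summarizer(text[:3000], max_length=150, min_length=40, do_sample=False)
print(result[0]["summary_text"])
```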
Framework-agnostic computer vision inference. Run 1000+ models by changing only one line of code. Supports models from transformers, timm, ultralytics, vllm, ollama, and your own custom models.
This repository allows you to get started with training a State-of-the-art Deep Learning model with little to no configuration needed! You provide your labeled dataset and you can start the training right away. You can even test your model with our built-in Inference REST API. Training classification models with GluonCV has never been so easy.
Unofficial Go (Golang) bindings for the Hugging Face Inference API
The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.
This is a repository for an image classification inference API using the GluonCV framework. The inference REST API works on CPU/GPU and is supported on Windows and Linux operating systems. Models trained with our GluonCV classification training repository can be deployed in this API, and several models can be loaded and used at the same time.
🤗 Hugging Face Inference Client written in Go
Eternal is an experimental platform for machine learning models and workflows.
Typescript wrapper for the Hugging Face Inference API.
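The Go and TypeScript clients above wrap the same HTTP protocol; a rough sketch of the underlying request, based on the public Hugging Face Inference API (model id and token are placeholders):

```python
# Rough sketch of the HTTP call that Hugging Face Inference API clients wrap.
# The model id and token are placeholders.
import requests

API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
headers = {"Authorization": "Bearer hf_xxx"}  # replace with a real token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love this library!"})
response.raise_for_status()
print(response.json())  # e.g. {"label": ..., "score": ...} entries for classification models
```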
the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
Llama3.java inference engine with an OpenAI Chat Completion REST API
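Because the server exposes an OpenAI-compatible Chat Completions endpoint, any standard client should work; a hedged sketch where the base URL, port, and model name are assumptions rather than documented values from the project:

```python
# Hedged sketch of calling an OpenAI-compatible Chat Completions endpoint.
# The base URL, port, and model name are assumptions, not documented Llama3.java values.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```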
Describes how to enable the OpenVINO Execution Provider for ONNX Runtime
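Enabling the provider is typically a one-line change when creating the inference session; a minimal sketch assuming the onnxruntime-openvino build is installed and using a placeholder model path and input shape:

```python
# Sketch: run an ONNX model with the OpenVINO Execution Provider, falling back to CPU.
# Requires the onnxruntime-openvino build; "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape depends on the model
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```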
Tool for testing different large language models without code.
An open-source framework for Retrieval-Augmented Generation (RAG) that uses semantic search to retrieve the expected results and generate human-readable conversational responses with the help of an LLM (Large Language Model).
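A toy sketch of that retrieve-then-generate flow; the library choice here (sentence-transformers for embeddings) and the sample documents are illustrative, not the framework's actual stack:

```python
# Toy retrieve-then-generate sketch; sentence-transformers is an illustrative choice,
# not necessarily what this framework uses internally.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "The inference server supports GPU batching.",
    "Models can be deployed behind a REST API.",
    "Fine-tuned checkpoints are loaded from local disk.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

query = "How do I serve a model over HTTP?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]

# Semantic search: cosine similarity reduces to a dot product on normalized vectors.
best = docs[int(np.argmax(doc_vecs @ query_vec))]

# The retrieved passage would then be placed into the LLM prompt for generation.
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```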