
Nillion AIVM

AIVM is a cutting-edge framework designed for privacy-preserving inference using advanced cryptographic protocols. With AIVM, you can deploy a local development network (devnet) to explore private inference using provided examples or custom models.

Table of Contents

- [Installing AIVM](#installing-aivm)
- [Running AIVM](#running-aivm)
- [Usage](#usage)
- [License](#license)

Installing AIVM

1. Create a virtual environment:

   ```shell
   python3 -m venv .venv
   ```

2. Activate the virtual environment:

   On Linux/macOS:

   ```shell
   source .venv/bin/activate
   ```

   On Windows:

   ```shell
   .\.venv\Scripts\activate
   ```

3. Install the package:

   If you plan to run the provided examples, install with the `examples` extra:

   ```shell
   pip install "nillion-aivm[examples]"
   ```

   Otherwise, if you are going to write your own code, the base package is enough:

   ```shell
   pip install nillion-aivm
   ```

Running AIVM

1. Start the AIVM devnet:

   ```shell
   aivm-devnet
   ```

2. Open the provided Jupyter notebook `examples/getting-started.ipynb` to run private inference examples on AIVM.

3. After completing your tasks, terminate the devnet process by pressing `CTRL+C`.

Usage

For additional usage, refer to the `examples` folder, which demonstrates how to set up private inference workflows using AIVM.

Performing Secure Inference

Basic Usage

1. First, import the AIVM client and check the available models:

   ```python
   import aivm_client as aic

   # List all supported models
   available_models = aic.get_supported_models()
   print(available_models)
   ```

2. Prepare your input data. Here's an example using PyTorch to generate a random input:

   ```python
   import torch

   # Create a sample input (e.g., for LeNet5 MNIST)
   random_input = torch.randn((1, 1, 28, 28))  # Batch size 1, 1 channel, 28x28 pixels
   ```

3. Encrypt your input using the appropriate Cryptensor:

   ```python
   # Encrypt the input
   encrypted_input = aic.LeNet5Cryptensor(random_input)
   ```

4. Perform secure inference:

   ```python
   # Get prediction while maintaining privacy
   result = aic.get_prediction(encrypted_input, "LeNet5MNIST")
   ```

The `get_prediction` function automatically handles the secure computation protocol with the `aivm-devnet` nodes, ensuring that your input data remains private throughout the inference process.

Custom Model Upload

You can deploy your own trained models to AIVM, provided they follow the supported architectures (BertTiny or LeNet5).

Uploading Custom Models

1. Import the AIVM client:

   ```python
   import aivm_client as aic
   ```

2. Upload your custom model:

   ```python
   # For BertTiny models
   aic.upload_bert_tiny_model(model_path, "MyCustomBertTiny")

   # For LeNet5 models
   aic.upload_lenet5_model(model_path, "MyCustomLeNet5")
   ```

3. Perform inference with your custom model:

   ```python
   # For BertTiny models
   result = aic.get_prediction(private_berttiny_input, "MyCustomBertTiny")

   # For LeNet5 models
   result = aic.get_prediction(private_lenet5_input, "MyCustomLeNet5")
   ```
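The steps above assume you already have a trained model file at `model_path`. As a starting point, here is a minimal sketch of a classic LeNet5 in PyTorch that matches the 1×1×28×28 MNIST input shape used earlier. The exact serialization format expected by `upload_lenet5_model` is an assumption here; this sketch simply saves the model's `state_dict` with `torch.save`, and the training loop is omitted:

```python
# Hypothetical sketch: defining and saving a LeNet5 for upload to AIVM.
# Assumption: upload_lenet5_model accepts a file produced by torch.save.
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Classic LeNet5 for 1x28x28 MNIST input, 10 output classes."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, 10),                          # 10 MNIST classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5()
# ... train the model on MNIST here ...
torch.save(model.state_dict(), "my_lenet5.pth")
model_path = "my_lenet5.pth"
```

The architecture must match one of the supported ones (BertTiny or LeNet5) for the upload to succeed.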

License

This project is licensed under the MIT License.
