
v0.3.0

@andrei-stoian-zama released this 06 Sep 13:51

Summary

Concrete-ML now lets users deploy models in a client-server setting, separating encryption and decryption from execution, which can be performed by a remote machine. The release also adds support for new models and new neural network layers, and allows importing ONNX models directly, which brings support for some Keras/TensorFlow models. Furthermore, this release adds initial support for importing Quantization Aware Training (QAT) neural networks, which contain quantizers in the operation graph and can be built with Brevitas.
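The client/server deployment flow described above can be sketched with a toy example. This is not the Concrete-ML API: the additive cipher and all function names below are hypothetical placeholders, used only to illustrate that the key stays on the client while execution happens on the server:

```python
# Toy sketch of the client/server split (NOT the Concrete-ML API).
# A modular additive "one-time pad" stands in for real FHE encryption:
# the server can add a constant to the ciphertext without learning x.

MODULUS = 2**8    # toy 8-bit message space
SECRET_KEY = 42   # held only by the client


def client_encrypt(x: int) -> int:
    """Client side: encrypt the input before sending it out."""
    return (x + SECRET_KEY) % MODULUS


def server_execute(ciphertext: int, bias: int = 7) -> int:
    """Server side: apply a (toy) model step on data it cannot read."""
    return (ciphertext + bias) % MODULUS


def client_decrypt(ciphertext: int) -> int:
    """Client side: decrypt the server's result."""
    return (ciphertext - SECRET_KEY) % MODULUS


x = 100
result = client_decrypt(server_execute(client_encrypt(x)))
# result == 107: the model's output (x + 7), computed remotely
```

Real FHE schemes allow far richer server-side computation than adding a constant, but the division of roles is the same: only the client ever sees the plaintext.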

Links

Docker Image: zamafhe/concrete-ml:v0.3.0
pip: https://pypi.org/project/concrete-ml/0.3.0
Documentation: https://docs.zama.ai/concrete-ml


Feature

  • Allow recompiling from onnx model (9b69e73)
  • Adding support for p_error (fe03441)
  • Adding GELU activation (c732d15)
  • Add random_state to models for client server reproducibility (6f887d0)
  • Add QAT notebook (64c4512)
  • Import Brevitas QAT networks (6c40a0c)
  • Integration of the encrypt/decrypt API (3c2a68a)
  • Support more input types in predict() (76e142c)
  • Support more input types in fit() (1fa74f7)
  • Add Round and Pow operators (d6880ce)
  • Ability to import Quantization Aware Training networks (c1bb947)
  • Compile user supplied ONNX to support keras/tf (fae3dc5)
  • Implement Generalized Linear Regression models (8e8e025)
  • Add SoftSign activation (3ce338e)
  • Adding more activations (e43ce5c)
  • Implement Poisson Regression (09eefa5)
  • Use the 8b of precision of Concrete Numpy (249c712)
  • Add ONNX flatten support (c5f215f)
  • Handle more tree-based classifiers (950cc6c)
  • Add Batch Normalization ONNX operator (7969739)
  • Add Where, Greater, Mul, Sub ONNX operator support (f939149)
  • Add ONNX Average Pooling and Pad operator (40f1ef9)
  • Add more activation functions (26b2221)
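Several of the features above revolve around quantization (importing QAT networks, using the 8 bits of precision of Concrete Numpy). As an illustration of the underlying idea, here is a minimal uniform affine quantizer in plain Python; it is a conceptual sketch, not Concrete-ML's implementation:

```python
def quantize(values, n_bits=8):
    """Map floats to n_bits-wide integers via a scale and zero point,
    the kind of representation FHE circuits compute on."""
    vmin, vmax = min(values), max(values)
    scale = (vmax - vmin) / (2**n_bits - 1) if vmax != vmin else 1.0
    zero_point = round(-vmin / scale)
    qmax = 2**n_bits - 1
    # Clamp to the representable integer range [0, 2**n_bits - 1]
    quantized = [min(qmax, max(0, round(v / scale) + zero_point)) for v in values]
    return quantized, scale, zero_point


def dequantize(quantized, scale, zero_point):
    """Approximately reconstruct the original floats from the integers."""
    return [(q - zero_point) * scale for q in quantized]


vals = [-1.0, 0.0, 0.5, 1.0]
q, scale, zp = quantize(vals)
approx = dequantize(q, scale, zp)
# each reconstructed value is within one quantization step of the original
```

QAT networks embed quantizers like this directly in the operation graph, which is why they can be imported and compiled to integer-only FHE circuits.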

Fix

  • Make tree inference faster by creating new numpy boolean operators (206caa5)
  • Set a compatible version for protobuf (97ccfc0)
  • Improve IRIS FCNN FHE accuracy and visualization (02e497c)
  • Replace init call by set_params (111419e)
  • Fix wrong fit_benchmark in linear models (f257def)
  • Fix GridSearchCV on trees (b614285)
  • Support decision tree with custom classes (baa3b4d)
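The first fix above speeds up tree inference with new numpy boolean operators. The general idea behind FHE-friendly tree evaluation, computing every branch and combining leaves with 0/1 indicators instead of data-dependent branching, can be sketched as follows; the tree and function names are hypothetical and unrelated to the actual commit:

```python
def branchless_tree_predict(x):
    """Evaluate a tiny hard-coded decision tree without if/else on data.

    Hypothetical tree:
        x[0] <= 3 ?  yes -> (x[1] <= 1 ? class 0 : class 1)
                     no  -> class 2

    Every comparison yields a 0/1 indicator, and the leaf values are
    combined arithmetically, the evaluation style FHE circuits require.
    """
    left = int(x[0] <= 3)        # 1 when the left subtree is taken
    left_left = int(x[1] <= 1)   # 1 when the left-left leaf is reached
    return left * (left_left * 0 + (1 - left_left) * 1) + (1 - left) * 2


result = branchless_tree_predict([2, 0])  # -> 0 (left, then left-left leaf)
```

Expressing the comparisons as boolean arrays lets all nodes of all trees be evaluated in vectorized numpy operations, which is what makes this style of inference fast.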

Documentation

  • Major refresh of 0.3 doc (e5e3205)
  • Add sentiment classification notebook (68ae7d0)
  • Restrict hyperparameters in titanic notebook for faster inference (9b63c8a)
  • QAT explanation (9430ba6)
  • Document ONNX compilation (972d05e)
  • Explain quantized vs float ops and fusing (fb9b409)
  • Add doc for pandas support (34652ce)
  • Add notebook and docs client server api (99ff1e7)
  • Developing custom models (1c6f571)
  • Explain built-in quantized Neural Networks (972d464)
  • Add a notebook for Kaggle Titanic competition (0a44853)