💡 Highlights
We are delighted to announce a new release of N2D2. This version adds the SAT quantization method and the STM32 export to the open-source version of N2D2, along with several bug fixes in the Python API.
We would like to thank everyone who contributed to this release!
- The SAT quantization method is now open source: SAT is now available through both the INI and Python APIs. To get started with this feature, you can follow the tutorials available in the documentation;
- Various improvements to the Python API: we have made several improvements to the Python API, focusing on better handling of transformations, interoperability with Keras and PyTorch, and better handling of exports;
- Our STM32 export is now available to everyone: you can now export your model for STM32 boards. More details in the documentation;
- Pruning module added: we have added a pruning module so that this method can be used in N2D2 to compress models. This addition complements the quantization techniques already present in the framework.
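To give an idea of what the SAT method does under the hood: SAT-style quantization-aware training relies on a DoReFa-like uniform weight quantization step. The NumPy sketch below illustrates that step only; the function names are ours for illustration and are not part of the N2D2 API.

```python
import numpy as np

def uniform_quantize(x, n_bits):
    """Quantize values in [0, 1] onto 2**n_bits uniform levels
    (the rounding step used in DoReFa/SAT-style training)."""
    levels = 2 ** n_bits - 1
    return np.round(x * levels) / levels

def sat_quantize_weights(w, n_bits):
    """DoReFa-style weight quantization: squash weights with tanh,
    map them to [0, 1], quantize, then map back to [-1, 1]."""
    t = np.tanh(w)
    normalized = t / (2.0 * np.max(np.abs(t))) + 0.5  # now in [0, 1]
    return 2.0 * uniform_quantize(normalized, n_bits) - 1.0

w = np.array([-1.2, -0.3, 0.0, 0.4, 1.5])
wq = sat_quantize_weights(w, n_bits=2)  # 4 discrete levels in [-1, 1]
```

At inference time only the discrete levels remain, which is what makes the quantized network cheap to run on integer hardware.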
The pace of updates to N2D2 is slowing as we are working on an exciting complete rewrite of the software. 💪
More news coming soon! 🤗
🐛 Bug Fixes
Python API
- Fixed infinite loop error in summary method after using DeepNetCell.remove();
- Fixed Torch to N2D2 conversion in the case of a view tensor;
- Fixed the pip install method for machines with a single processor;
- Fixed an error in the way BatchNorm statistics were saved. Statistics are no longer saved; instead, we temporarily update the BatchNorm propagate method to avoid BatchNorm fusion and statistics updates;
- Fixed a bug when attempting to quantize the Reshape layer during PTQ;
- Fixed the extraction process for .gz files in the dataset installation script.
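For context on the BatchNorm fix above: fusing a BatchNorm layer into the preceding layer folds its running statistics into that layer's weights and bias, which is why accidentally updating the statistics during export corrupts the result. A minimal NumPy sketch of the standard folding (a toy matmul stands in for the convolution; names are illustrative, not N2D2 internals):

```python
import numpy as np

def fuse_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding layer:
    y = gamma * (w @ x + b - mean) / sqrt(var + eps) + beta
      = (gamma/std) * w @ x + (gamma/std) * (b - mean) + beta."""
    std = np.sqrt(var + eps)
    scale = gamma / std
    w_fused = w * scale[:, None]        # one scale per output channel
    b_fused = (b - mean) * scale + beta
    return w_fused, b_fused

# Check: the fused layer matches layer + BatchNorm on a random input.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)); b = rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
x = rng.normal(size=8)

bn_out = gamma * (w @ x + b - mean) / np.sqrt(var + 1e-5) + beta
w_f, b_f = fuse_batchnorm(w, b, gamma, beta, mean, var)
fused_out = w_f @ x + b_f
```

The two outputs agree to numerical precision, which is the invariant a correct BatchNorm fuse must preserve.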
Core C++
- Fixed Mapping and Group handling on CPU for customized Mappings;
- Fixed a bug that could occur during compilation on 32-bit processors;
- Added a Scheduler class to implement local scheduling;
- Fixed a bug in CPP export generation.
⚙️ Refactor
Python API
- Torch Interop now uses the ONNX simplifier;
- Added the function `n2d2.deepnet.associate_provider_to_deepnet` to reduce code redundancy in `export.py` and `quantization.py`;
- Fixed a typo in the Pool and Scaling `is_exportable_to` method;
- Added an option to disable memory optimizations for the C++ export.
Exports
- Changed the path of the calibration folder (now generated inside the statistics folder).
🚀 New features
Python API
- The Torch and Keras Interop `wrap` methods now allow setting the opset version used to generate the ONNX model;
- Export now checks whether the `dataprovider` contains data, and prints a warning otherwise;
- Added analysis tools to DeepNetCell;
- Added `set_provider` to DeepNetCell;
- Updated the `data_augmentation.py` example with a `data_path` parameter;
- Added a method in Database to download data (not available for every database);
- Updated `converter.from_N2D2` to support conversion of C++ transformations to Python;
- Added the `DataProvider.get_transformations` method;
- Added `DataProvider.normalize_stimuli` to normalize stimuli between [0, 1];
- Added the `Reshape` transformation to the Python API;
- Added the `SATCell` object to quantize your layers with the SAT method;
- Added the `PruneCell` object to prune the weights of the associated layer;
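`PruneCell` brings pruning into the framework's compression toolbox; the classic technique it relates to, magnitude-based pruning, can be sketched in a few lines of NumPy. This illustrates the general method only, under our own function names, not N2D2's internal implementation:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value; return the pruned weights and the binary mask."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w)
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = (np.abs(w) > threshold).astype(w.dtype)
    return w * mask, mask

w = np.array([[0.5, -0.05, 0.2],
              [-0.8, 0.01, -0.3]])
pruned, mask = magnitude_prune(w, sparsity=0.5)
```

The mask can then be kept fixed during fine-tuning so that the pruned weights stay at zero, which is how pruning and retraining are usually interleaved.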
Exports
- Added the STM32 export (available for STM32H7 and STM32L4 boards).