Intel® End-to-End AI Optimization Kit release v1.1
Highlights
This release introduces a new component: Model Adaptor. It adopts transfer learning methodologies to reduce training time, improve inference throughput, and reduce data labeling effort by taking advantage of public pretrained models and datasets. Model Adaptor provides three methods: Finetuner, Distiller, and Domain Adapter. It currently supports ResNet, BERT, GPT-2, and 3D U-Net models, covering the Image Classification, Natural Language Processing, and Medical Segmentation domains.
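As an illustration of what Distiller does, knowledge distillation transfers knowledge from a large teacher model to a smaller student by training the student on the teacher's softened output distribution. The following is a minimal generic PyTorch sketch of that technique, not the Model Adaptor API itself; the models, temperature, and loss weighting are illustrative assumptions.

```python
# Minimal knowledge-distillation sketch in plain PyTorch (illustrative only;
# not the e2eAIOK Model Adaptor API). Models, temperature, and loss weight
# are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

T, alpha = 4.0, 0.7  # distillation temperature and soft-loss weight (assumed)

def distill_step(x, labels):
    with torch.no_grad():
        teacher_logits = teacher(x)  # teacher is frozen during distillation
    student_logits = student(x)
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One training step on random data
x = torch.randn(16, 128)
labels = torch.randint(0, 10, (16,))
print(distill_step(x, labels))
```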
This release provides the following major features:
- Model Adaptor Finetuner
- Model Adaptor Distiller
- Model Adaptor Domain Adapter
- Support for Hugging Face models in train-free NAS (a minimal proxy sketch follows this list)
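Train-free NAS ranks candidate architectures with zero-cost proxies instead of fully training each one. The sketch below scores two hypothetical BERT configurations built with Hugging Face `transformers` using a simple gradient-based saliency proxy; the proxy choice and the candidate configurations are assumptions for illustration, not the kit's actual search implementation.

```python
# Train-free NAS idea in miniature (illustrative only; not the kit's
# implementation): rank Hugging Face BERT variants with a zero-cost,
# gradient-based saliency proxy rather than training each candidate.
import torch
from transformers import BertConfig, BertModel

def zero_cost_score(config, seq_len=16, batch=2):
    model = BertModel(config)
    model.train()
    input_ids = torch.randint(0, config.vocab_size, (batch, seq_len))
    out = model(input_ids=input_ids).last_hidden_state
    out.sum().backward()  # one backward pass on random data, no training
    # SNIP-style saliency: sum of |weight * gradient| over all parameters
    return sum((p * p.grad).abs().sum().item()
               for p in model.parameters() if p.grad is not None)

# Two hypothetical candidate architectures (assumed search points)
candidates = {
    "small": BertConfig(hidden_size=256, num_hidden_layers=4,
                        num_attention_heads=4, intermediate_size=1024),
    "base":  BertConfig(hidden_size=512, num_hidden_layers=8,
                        num_attention_heads=8, intermediate_size=2048),
}
for name, cfg in candidates.items():
    print(name, zero_cost_score(cfg))
```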
Improvements
- Updated demos with Colab click-to-run support
- Updated Docker image with Jupyter support
Papers and Blogs
- The Parallel Universe Magazine - Accelerate AI Pipelines with New End-to-End AI Kit
- Multi-Model, Hardware-Aware Train-Free Neural Architecture Search
- SigOpt Blog - Enhance Multi-Model Hardware-Aware Train-Free NAS with SigOpt
- The Intel® SIHG4SR Solution for the ACM RecSys Challenge 2022
Versions and Components
- TensorFlow 2.10.0
- PyTorch 1.5, 1.12
- Intel® Extension for TensorFlow 2.10.x
- Intel® Extension for PyTorch 0.2, 1.12.x
- Horovod 0.26
- Python 3.9.12
Links
Full Changelog: https://github.com/intel/e2eAIOK/commits/v1.1