This repository contains code for the paper "EH-DNAS: End-to-End Hardware-aware Differentiable Neural Architecture Search".
We provide the hardware performance dataset collected with E2E-Perf, the pre-trained hardware loss models, and the final searched models. We also provide code for training the hardware loss, searching architectures, and retraining. All experiments use the DARTS search space.
To install the dependencies, adjust the cudatoolkit version for your system and run the following:
conda create --name myenv
conda activate myenv
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch -c nvidia
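To verify that the install picked up a working CUDA build, a quick generic check (nothing repo-specific) can be run inside the activated environment:

```python
import torch

print(torch.__version__)          # installed PyTorch build
print(torch.cuda.is_available())  # True if cudatoolkit matches your driver
```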
The datasets used in our experiments are provided as follows:
- We provide the sampled architectures as pickle files in dnas/DARTS/hwdataset_100w/ for direct use
- To resample the architectures and store them as pickle files in dnas/DARTS/hwdataset_100w/ (1000K train, 200K val, 200K test), run:
python dnas/DARTS/darts_sampler.py
- We provide the dataset outputs as pickle files in dnas/DARTS/hwdataset_100w/ for direct use (a loading sketch follows this list)
- We also provide the trained hardware loss models in dnas/acc/ and dnas/pip/ for direct use
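The pickle files can be inspected directly before use. Below is a minimal sketch, where train.pkl is a hypothetical filename standing in for an actual file in dnas/DARTS/hwdataset_100w/:

```python
import pickle

# 'train.pkl' is a hypothetical filename; substitute an actual file
# from dnas/DARTS/hwdataset_100w/.
with open('dnas/DARTS/hwdataset_100w/train.pkl', 'rb') as f:
    data = pickle.load(f)

print(type(data))  # inspect the stored structure before wiring it into training
```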
- Train the hardware loss model with the pipeline paradigm:
python hw_loss/main.py --config hw_loss/config_pip.yaml --para=pip
- Train the hardware loss model with the generic paradigm:
python hw_loss/main.py --config hw_loss/config_acc.yaml --para=acc
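Conceptually, the hardware loss model is a differentiable surrogate that maps an architecture encoding to a predicted hardware performance score, which can then be added to the task loss during search. The sketch below is illustrative only; the class name, layer sizes, and input dimension are assumptions, not the repo's actual model definition:

```python
import torch
import torch.nn as nn

# Illustrative surrogate: an MLP that regresses a hardware performance score
# from a flattened architecture encoding. All dimensions are assumptions.
class HwSurrogate(nn.Module):  # hypothetical name, not the repo's class
    def __init__(self, in_dim=224, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted hardware score
        )

    def forward(self, arch_encoding):
        return self.net(arch_encoding)

# During search the prediction enters the objective weighted by --hw_loss_rate:
#   total_loss = task_loss + hw_loss_rate * surrogate(arch_encoding)
```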
- Search with hardware loss for the generic paradigm:
python dnas/DARTS/train_search.py --hw_loss_type acc --hw_loss_rate 0.0001 --save acc --epochs 25
- Search with hardware loss for the pipeline paradigm:
python dnas/DARTS/train_search.py --hw_loss_type pip --hw_loss_rate 0.0005 --save pip --epochs 25
- The search log is saved in search-$save-$time/log.txt
- Look through the search log and save the searched architecture genotype in dnas/DARTS/genotypes.py under an $arch_name (the genotype format is sketched below)
- The architectures searched in the paper are saved in dnas/DARTS/genotypes.py as OURS_PIP and OURS_ACC for direct use
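Genotype entries follow the standard DARTS namedtuple format. A hypothetical entry is shown below; the operations and edges are placeholders for illustration, not a searched result:

```python
# The Genotype container is already defined in dnas/DARTS/genotypes.py;
# shown here only to illustrate the format.
from collections import namedtuple
Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')

# Hypothetical entry: each cell lists (op, input_node) pairs for its 4 nodes.
MY_ARCH = Genotype(
    normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0),
            ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('dil_conv_3x3', 2),
            ('sep_conv_3x3', 2), ('max_pool_3x3', 0)],
    normal_concat=[2, 3, 4, 5],
    reduce=[('max_pool_3x3', 0), ('sep_conv_5x5', 1), ('skip_connect', 2),
            ('max_pool_3x3', 0), ('dil_conv_5x5', 3), ('skip_connect', 2),
            ('skip_connect', 2), ('avg_pool_3x3', 0)],
    reduce_concat=[2, 3, 4, 5],
)
```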
- Construct the model for CIFAR-10 and train it from scratch to evaluate classification performance:
python dnas/DARTS/train.py --arch $arch_name --cutout --auxiliary --save $arch_name
- Construct the model for ImageNet and train it from scratch to evaluate classification performance:
python dnas/DARTS/train_imagenet.py --batch_size 512 --epochs 150 --data $path_to_ImageNet --arch $arch_name --auxiliary --save img-$arch_name --parallel --report_freq 1000
If you find this code helpful, please cite the following paper:
@article{jiang2021eh,
  title={EH-DNAS: End-to-End Hardware-aware Differentiable Neural Architecture Search},
  author={Jiang, Qian and Zhang, Xiaofan and Chen, Deming and Do, Minh N and Yeh, Raymond A},
  journal={arXiv preprint arXiv:2111.12299},
  year={2021}
}
The DARTS code in this repository originates from the original DARTS repo.