«SSL» re-implements the paper Learning Structured Sparsity in Deep Neural Networks (Wen et al., 2016).
In addition to the pruning positions described in the paper (filter_wise / channel_wise / filter_and_channel_wise / depth_wise), this repository also experiments with different weight-scoring functions (group_lasso / mean_abs / mean / sum_abs / sum).
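For intuition, the sketch below shows how such scoring functions could rank the output filters of a convolution layer. The option names match the list above; the `filter_scores` helper and the weight layout are assumptions for illustration, not the repository's actual code.

```python
import torch

def filter_scores(weight: torch.Tensor, fn: str = "mean_abs") -> torch.Tensor:
    """Score each output filter of a conv weight (out_ch, in_ch, kH, kW).

    Low-scoring filters are candidates for pruning.
    """
    w = weight.flatten(start_dim=1)  # one row per output filter
    if fn == "group_lasso":  # L2 norm of the whole filter group
        return w.norm(p=2, dim=1)
    if fn == "mean_abs":
        return w.abs().mean(dim=1)
    if fn == "mean":
        return w.mean(dim=1)
    if fn == "sum_abs":
        return w.abs().sum(dim=1)
    if fn == "sum":
        return w.sum(dim=1)
    raise ValueError(f"unknown scoring function: {fn}")

# Example: score the 64 filters of a random 3x3 convolution.
scores = filter_scores(torch.randn(64, 32, 3, 3), fn="mean_abs")
print(scores.shape)  # torch.Size([64])
```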
For more training statistics, see:
Based on Group Lasso, SSL can achieve Filter/Channel/Filter Shape/Depth pruning.
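The idea is that a group-lasso penalty added to the training loss drives entire weight groups toward zero: grouping by output filter yields filter pruning, while grouping by input channel yields channel pruning. Below is a minimal PyTorch sketch of this mechanism; the 1e-5 coefficient echoes the `1e_5` suffix in the config names, and everything else is an assumption rather than this repository's implementation.

```python
import torch
import torch.nn as nn

def group_lasso(weight: torch.Tensor, mode: str = "filter_wise") -> torch.Tensor:
    """Group-lasso penalty over a conv weight of shape (out_ch, in_ch, kH, kW)."""
    if mode == "filter_wise":    # one group per output filter -> filter pruning
        groups = weight.flatten(start_dim=1)
    elif mode == "channel_wise":  # one group per input channel -> channel pruning
        groups = weight.transpose(0, 1).flatten(start_dim=1)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return groups.norm(p=2, dim=1).sum()

# Toy usage: add the penalty to an ordinary task loss.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
x = torch.randn(2, 3, 32, 32)
task_loss = model(x).mean()
reg = sum(group_lasso(m.weight) for m in model.modules()
          if isinstance(m, nn.Conv2d))
loss = task_loss + 1e-5 * reg
loss.backward()
```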
$ pip install -r requirements.txt
First, set the PYTHONPATH and CUDA_VISIBLE_DEVICES environment variables:
$ export PYTHONPATH=<project root path>
$ export CUDA_VISIBLE_DEVICES=0
Then run the train / prune / fine-tune pipeline:
- For training
$ python tools/train.py -cfg=configs/vggnet/vgg16_bn_cifar100_224_e100_sgd_mslr_ssl_filter_wise_1e_5.yaml
- For pruning (a conceptual sketch of the filter-selection step follows this list)
$ python tools/prune/prune_vggnet.py
- For fine-tuning
$ python tools/train.py -cfg=configs/vggnet/refine_mean_abs_0_2_vgg16_bn_cifar100_224_e100_sgd_mslr_ssl_filter_wise_1e_5.yaml
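Conceptually, the prune step drops the filters whose importance score falls below a threshold and copies the surviving weights into a smaller layer. The sketch below assumes a mean_abs criterion with a 0.2 pruning ratio (suggested by `mean_abs_0_2` in the refine config name, but unverified) and omits the rewiring of adjacent layers that a real pruning script must also do:

```python
import torch
import torch.nn as nn

def keep_mask(conv: nn.Conv2d, ratio: float = 0.2) -> torch.Tensor:
    """Boolean mask of filters to keep: drop the `ratio` fraction with the
    smallest mean absolute weight (mean_abs criterion)."""
    scores = conv.weight.detach().abs().mean(dim=(1, 2, 3))
    n_prune = int(ratio * conv.out_channels)
    threshold = scores.sort().values[n_prune]
    return scores >= threshold

conv = nn.Conv2d(3, 64, 3)
mask = keep_mask(conv, ratio=0.2)
pruned = nn.Conv2d(3, int(mask.sum()), 3)
pruned.weight.data = conv.weight.data[mask]  # copy surviving filters
pruned.bias.data = conv.bias.data[mask]
print(conv.out_channels, "->", pruned.out_channels)  # e.g. 64 -> 52
```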
Finally, set the path of the fine-tuned model in the PRELOADED option of the configuration file, then run the test:
$ python tools/test.py -cfg=configs/vggnet/refine_mean_abs_0_2_vgg16_bn_cifar100_224_e100_sgd_mslr_ssl_filter_wise_1e_5.yaml
- zhujian - Initial work - zjykzj
@misc{wen2016learning,
title={Learning Structured Sparsity in Deep Neural Networks},
author={Wei Wen and Chunpeng Wu and Yandan Wang and Yiran Chen and Hai Li},
year={2016},
eprint={1608.03665},
archivePrefix={arXiv},
primaryClass={cs.NE}
}
Contributions from anyone are welcome! Open an issue or submit a PR.
Small note:
- Git commit messages should follow the Conventional Commits specification
- For versioning, please conform to the Semantic Versioning 2.0.0 specification
- If editing the README, please conform to the standard-readme specification.
Apache License 2.0 © 2021 zjykzj