Model-Platform Optimized Deep Neural Network Accelerator Generation through Mixed-integer Geometric Programming
Link to the paper: https://ieeexplore.ieee.org/abstract/document/10171535
Citation:
```
@INPROCEEDINGS{10171535,
  author={Ding, Yuhao and Wu, Jiajun and Gao, Yizhao and Wang, Maolin and So, Hayden Kwok-Hay},
  booktitle={2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)},
  title={Model-Platform Optimized Deep Neural Network Accelerator Generation through Mixed-Integer Geometric Programming},
  year={2023},
  volume={},
  number={},
  pages={83-93},
  doi={10.1109/FCCM57271.2023.00018}}
```
AGNA is an open-source hardware generator for Deep Neural Networks (DNNs). Given the specifications of the target DNN model and FPGA platform, AGNA produces an FPGA accelerator that is optimized for that model-platform combination. AGNA can be broadly separated into a software part and a hardware part:
- Software: AGNA analyzes the specification of each layer and solves a mixed-integer geometric program (a generic form of this problem class is sketched after this list). It first generates a high-efficiency accelerator based on a general PE array architecture, and then, based on the generated accelerator, generates the schedule and instructions for each layer.
- Hardware: AGNA provides synthesizable source code for each hardware component. The target accelerator can be built by substituting the parameters generated by the software. A template project for zcu102 is also provided.
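For reference, a mixed-integer geometric program (the class of optimization problem the software part solves) has the generic form below. This is only an illustrative template; the concrete variables, posynomial cost/resource models, and integrality constraints used by AGNA are defined in the paper.

$$
\begin{aligned}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 1, \quad i = 1, \dots, m \\
& g_j(x) = 1, \quad j = 1, \dots, p \\
& x_k > 0 \ \text{for all } k, \qquad x_k \in \mathbb{Z}_{>0} \ \text{for } k \in \mathcal{I},
\end{aligned}
$$

where each $f_i$ is a posynomial, each $g_j$ is a monomial, and $\mathcal{I}$ indexes the variables that must take integer values (in AGNA's case, discrete hardware parameters; see the paper for the exact formulation).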
- Run in docker (Optional):
  We also provide a docker image that contains the necessary environment (except for Vivado). The image can be built with:
  ```
  docker build --build-arg HOST_UID=`id -u` --build-arg HOST_GID=`id -g` -t agna-local .
  ```
  Run the docker image with:
  ```
  docker run -it -v `pwd`:/home/user/workspace agna-local
  ```
  The current directory will be mounted to `/home/user/workspace` in the container.
- Run software:
  ```
  cd software
  export SCIPOPTDIR=<SCIPOPT_PATH>  # not required in docker
  conda env create --file environment.yml
  conda activate agna
  make all PLATFORM=<TARGET_PLATFORM> MODEL=<TARGET_MODEL>
  ```
  Make sure `<SCIPOPT_PATH>/bin/scip` is executable and that specification files are available at `software/spec/platforms/<TARGET_PLATFORM>.json` and `software/spec/models/<TARGET_MODEL>.json`. The generated architecture and schedule are in `software/results/<TARGET_PLATFORM>-<TARGET_MODEL>`.
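  As a concrete example, assuming a platform specification `software/spec/platforms/zcu102.json` is provided (zcu102 is the board targeted by the hardware template project) and a hypothetical model specification `software/spec/models/resnet18.json` exists, the invocation would look like:
  ```
  # resnet18 is a hypothetical model spec name used only for illustration
  make all PLATFORM=zcu102 MODEL=resnet18
  ```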
- Build hardware:
  ```
  cd hardware
  make all
  ```
  The generated project and bitstream are in `hardware/prj`.
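  Note that project and bitstream generation requires Vivado on the host (it is not included in the docker image). A minimal sketch, assuming a standard Vivado installation; the path and version below are placeholders, so adjust them to your setup:
  ```
  # hypothetical install path and version; adjust to your Vivado installation
  source /tools/Xilinx/Vivado/2022.2/settings64.sh
  cd hardware && make all
  ```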
- Prerequisite:
  ```
  sudo apt update
  sudo apt install -y wget cmake g++ m4 xz-utils libgmp-dev unzip zlib1g-dev libboost-program-options-dev libboost-serialization-dev libboost-regex-dev libboost-iostreams-dev libtbb-dev libreadline-dev pkg-config git liblapack-dev libgsl-dev flex bison libcliquer-dev gfortran file dpkg-dev libopenblas-dev rpm
  sudo apt install -y libopenmpi-dev libomp-dev
  ```
- Build Ipopt:
  ```
  mkdir coinbrew && cd coinbrew
  wget https://raw.githubusercontent.com/coin-or/coinbrew/master/coinbrew
  chmod +x coinbrew
  ./coinbrew fetch Ipopt@<version>  # replace <version> with the desired Ipopt release
  export IPOPT_DIR=/tools/Ipopt  # install directory of Ipopt, could be other places
  mkdir -p ${IPOPT_DIR}
  ./coinbrew build Ipopt --prefix=${IPOPT_DIR} --test --no-prompt --verbosity=3
  sudo ./coinbrew install Ipopt --no-prompt
  ```
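  A quick sanity check of the install, assuming the usual autotools layout that coinbrew produces under `${IPOPT_DIR}`:
  ```
  # should print the installed Ipopt version
  PKG_CONFIG_PATH=${IPOPT_DIR}/lib/pkgconfig pkg-config --modversion ipopt
  ```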
- Build SCIPOpt:
  - Download scipoptsuite-8.0.3.tgz.
  - Install:
    ```
    tar xzf scipoptsuite-8.0.3.tgz
    cd scipoptsuite-8.0.3
    mkdir build && cd build
    export SCIPOPT_PATH=/tools/scipoptsuite-8.0.3  # install directory of SCIPOPT, could be other places
    cmake .. -DCMAKE_INSTALL_PREFIX=${SCIPOPT_PATH} -DIPOPT_DIR=${IPOPT_DIR} -DTPI=omp
    make
    make check
    make install
    ```
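  After installation, a quick sanity check (the SCIP shell prints its version banner and exits; `-c` passes dialog commands):
  ```
  ${SCIPOPT_PATH}/bin/scip -c quit
  ```
  This `${SCIPOPT_PATH}` is the directory to export as `SCIPOPTDIR` in the "Run software" step above.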