An Efficient Hybrid Deep Learning Accelerator for Compact and Heterogeneous CNNs
Journal article, 2024
We propose FiBHA (Fixed Budget Hybrid CNN Accelerator), a hybrid accelerator composed of a single-engine single-layer part and a single-engine multiple-layer part, each processing a subset of CNN layers. FiBHA captures more heterogeneity than a single-engine multiple-layer design while being more resource-aware and scalable than a single-engine single-layer one. Moreover, we propose a novel module, Fused Inverted Residual Bottleneck (FIRB), a fine-grained and memory-light single-engine single-layer architecture building block. The proposed architecture is implemented and evaluated using high-level synthesis (HLS) on different FPGAs representing various resource budgets. Our evaluation shows that FiBHA improves throughput by up to 4x and 2.5x compared to state-of-the-art single-engine single-layer and single-engine multiple-layer accelerators, respectively. Moreover, FiBHA reduces memory and energy consumption compared to a single-engine multiple-layer accelerator. The evaluation also shows that FIRB reduces the required memory by up to 54% and the energy consumption by up to 35% compared to traditional pipelining.
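To illustrate the hybrid idea at a very high level, the sketch below uses a simple initiation-interval throughput model: the first layers are assigned to per-layer pipelined engines (single-engine single-layer), so their steady-state interval is set by the slowest stage, while the remaining layers share one engine (single-engine multiple-layer), so their interval is the sum of their costs. This is a minimal, self-contained C++ sketch for intuition only; the layer names, cycle counts, and split point are hypothetical and not taken from the paper, and it does not reproduce FiBHA's actual partitioning algorithm or FIRB module.

#include <cstdio>
#include <vector>
#include <algorithm>

struct Layer {
    const char* name;
    double cycles;  // assumed cycles per inference for this layer
};

int main() {
    // Hypothetical per-layer costs of a compact CNN (illustrative values only).
    std::vector<Layer> net = {
        {"conv1", 1200}, {"bneck1", 900}, {"bneck2", 800},
        {"bneck3", 700}, {"bneck4", 650}, {"fc", 300}
    };

    std::size_t split = 3;  // first `split` layers get dedicated per-layer engines

    // Pipelined (single-engine single-layer) part: one engine per layer,
    // so the steady-state interval is limited by the slowest stage.
    double pipelined_ii = 0.0;
    for (std::size_t i = 0; i < split; ++i)
        pipelined_ii = std::max(pipelined_ii, net[i].cycles);

    // Shared (single-engine multiple-layer) part: one engine processes the
    // remaining layers sequentially, so its interval is their sum.
    double shared_ii = 0.0;
    for (std::size_t i = split; i < net.size(); ++i)
        shared_ii += net[i].cycles;

    // The hybrid accelerator's throughput is bounded by the slower of the two parts.
    double hybrid_ii = std::max(pipelined_ii, shared_ii);
    std::printf("pipelined II = %.0f, shared II = %.0f, hybrid II = %.0f cycles\n",
                pipelined_ii, shared_ii, hybrid_ii);
    return 0;
}

Under this toy model, moving the split point trades pipeline resources in the dedicated part against serialization in the shared part, which is the balance a fixed-budget hybrid design must strike.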
hardware software co-design
FPGA
Convolutional neural networks (CNNs)
pipelined accelerator
deep learning
hybrid accelerator
Authors
Fareed Mohammad Qararyah
Chalmers, Computer Science and Engineering (Chalmers), Computer Engineering (Chalmers)
Muhammad Waqar Azhar
Chalmers, Computer Science and Engineering (Chalmers), Computer Engineering (Chalmers)
Pedro Petersen Moura Trancoso
Chalmers, Computer Science and Engineering (Chalmers), Computer Engineering (Chalmers)
ACM Transactions on Architecture and Code Optimization
1544-3566 (ISSN) 1544-3973 (eISSN)
Vol. 21, Issue 2, Article 25
Very Efficient Deep Learning in IoT (VEDLIoT)
European Commission (EC) (EC/H2020/957197), 2020-11-01 -- 2023-10-31.
Subject Categories
Other Engineering and Technologies
Computer Science
DOI
10.1145/3639823