An Efficient Hybrid Deep Learning Accelerator for Compact and Heterogeneous CNNs
Journal article, 2024

Resource-efficient Convolutional Neural Networks (CNNs) are gaining attention. These CNNs have relatively low computational and memory requirements, and a common denominator among them is that they are more heterogeneous than traditional CNNs. This heterogeneity is present at two levels: intra-layer-type and inter-layer-type. Generic accelerators capture neither level of heterogeneity, which harms their efficiency. Consequently, researchers have proposed model-specific accelerators with dedicated engines. When designing an accelerator with dedicated engines, one option is to dedicate an engine per CNN layer. We refer to accelerators designed with this approach as single-engine single-layer (SESL). This approach enables optimizing each engine for its specific layer, but such accelerators are resource-demanding and do not scale. Another option is to design a minimal number of dedicated engines such that each engine handles all layers of one type. We refer to these accelerators as single-engine multiple-layer (SEML). SEML accelerators capture inter-layer-type heterogeneity, but not intra-layer-type heterogeneity.
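To make the two levels of heterogeneity concrete, the following Python sketch counts multiply-accumulate (MAC) operations for a few layers; the layer shapes are illustrative assumptions, not taken from the paper. A standard and a depthwise convolution over the same feature map differ by roughly the channel count (inter-layer-type heterogeneity), while two layers of the same type can still differ widely in shape and compute intensity (intra-layer-type heterogeneity).

def conv_macs(h, w, c_in, c_out, k):
    # MACs of a standard k x k convolution over an h x w x c_in input
    return h * w * c_in * c_out * k * k

def dwconv_macs(h, w, c, k):
    # MACs of a depthwise k x k convolution (one filter per channel)
    return h * w * c * k * k

# Inter-layer-type: same feature map, ~64x fewer MACs for depthwise.
print(conv_macs(56, 56, 64, 64, 3))    # 115,605,504
print(dwconv_macs(56, 56, 64, 3))      # 1,806,336

# Intra-layer-type: two standard convolutions with very different
# shapes and compute-to-data ratios (assumed early vs. late layers).
print(conv_macs(112, 112, 32, 64, 3))  # 231,211,008
print(conv_macs(7, 7, 512, 512, 3))    # 115,605,504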
We propose FiBHA (Fixed Budget Hybrid CNN Accelerator), a hybrid accelerator composed of an SESL part and an SEML part, each processing a subset of the CNN layers. FiBHA captures more heterogeneity than SEML accelerators while being more resource-aware and scalable than SESL ones. Moreover, we propose a novel module, the Fused Inverted Residual Bottleneck (FIRB), a fine-grained and memory-light SESL building block. The proposed architecture is implemented and evaluated using high-level synthesis (HLS) on different FPGAs representing various resource budgets. Our evaluation shows that FiBHA improves throughput by up to 4x and 2.5x compared to state-of-the-art SESL and SEML accelerators, respectively. Moreover, FiBHA reduces memory and energy consumption compared to an SEML accelerator. The evaluation also shows that FIRB reduces the required memory by up to 54% and the energy by up to 35% compared to traditional pipelining.
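As a rough illustration of the hybrid idea, the sketch below greedily dedicates per-layer SESL engines to a prefix of the network until a fixed resource budget is exhausted and maps the remaining layers to a shared SEML engine. The function name, the per-layer cost model, and the greedy prefix heuristic are assumptions for illustration only; the abstract does not describe FiBHA's actual partitioning algorithm.

def partition_layers(layer_costs, budget):
    # Split layers into a SESL prefix (one dedicated engine per layer)
    # and a SEML remainder (one shared engine), under a fixed budget.
    # layer_costs: per-layer resource estimates (e.g., DSP slices)
    sesl, used = [], 0
    for i, cost in enumerate(layer_costs):
        if used + cost > budget:
            return sesl, list(range(i, len(layer_costs)))
        sesl.append(i)
        used += cost
    return sesl, []  # the budget covered every layer

# Toy example: six layers; dedicated engines fit the first three.
sesl_part, seml_part = partition_layers([40, 32, 28, 25, 20, 15], 100)
print(sesl_part)  # [0, 1, 2] -> SESL part
print(seml_part)  # [3, 4, 5] -> SEML part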

hardware/software co-design

FPGA

Convolutional neural networks (CNNs)

pipelined accelerator

deep learning

hybrid accelerator

Authors

Fareed Mohammad Qararyah

Chalmers, Computer Science and Engineering, Computer Engineering

Muhammad Waqar Azhar

Chalmers, Computer Science and Engineering, Computer Engineering

Pedro Petersen Moura Trancoso

Chalmers, Computer Science and Engineering, Computer Engineering

ACM Transactions on Architecture and Code Optimization

1544-3566 (ISSN) 1544-3973 (eISSN)

Vol. 21, Issue 2, Article 25

Very Efficient Deep Learning in IoT (VEDLIoT)

European Commission (EU) (EC/H2020/957197), 2020-11-01 -- 2023-10-31.

Subject categories

Other Engineering and Technologies

Computer Science

DOI

10.1145/3639823

More information

Last updated

2024-06-12