Combining Relevance and Magnitude for Resource-saving DNN Pruning
Journal article, 2025

Pruning neural networks, i.e., removing some of their parameters whilst retaining their accuracy, is one of the main ways to reduce the latency of a machine learning pipeline, especially in resource- and/or bandwidth-constrained scenarios. In this context, the pruning technique, i.e., how to choose the parameters to remove, is critical to the system performance. In this paper, we propose a novel pruning approach, called FlexRel and predicated upon combining training-time and inference-time information, namely, parameter magnitude and relevance, in order to improve the resulting accuracy whilst saving both computational resources and bandwidth. Our performance evaluation shows that FlexRel is able to achieve higher pruning factors, saving over 35% bandwidth for typical accuracy targets.
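The abstract describes combining two pruning criteria, parameter magnitude and relevance, into a single score used to decide which parameters to remove. The paper does not disclose the exact combination rule, so the sketch below is a hypothetical illustration: it min-max-normalizes each criterion and takes a convex combination (weight `alpha` is an assumption), then keeps the highest-scoring fraction of parameters.

```python
import numpy as np

def flexrel_scores(weights, relevance, alpha=0.5):
    """Combine parameter magnitude and relevance into one pruning score.

    Hypothetical sketch: FlexRel's actual combination rule is not given in
    the abstract; here we use a convex combination of the min-max-normalized
    magnitude |w| and the relevance values, weighted by `alpha`.
    """
    def normalize(x):
        x = np.abs(x).ravel()
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    return alpha * normalize(weights) + (1 - alpha) * normalize(relevance)

def prune_mask(weights, relevance, prune_fraction=0.35, alpha=0.5):
    """Return a boolean keep-mask that removes the lowest-scoring fraction."""
    scores = flexrel_scores(weights, relevance, alpha)
    k = int(prune_fraction * scores.size)
    # k-th smallest combined score serves as the pruning threshold
    threshold = np.partition(scores, k)[k] if k > 0 else -np.inf
    return (scores >= threshold).reshape(weights.shape)
```

Applying the mask (e.g. `weights * prune_mask(weights, relevance)`) zeroes out the pruned parameters; in a distributed setting, only the surviving entries would need to be transmitted, which is where the bandwidth saving arises.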

Resource utilization

Machine Learning model compression

Distributed learning

Authors

Carla Fabiana Chiasserini

Consiglio Nazionale delle Ricerche

Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT)

Politecnico di Torino

Francesco Malandrino

Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT)

Consiglio Nazionale delle Ricerche

N. Molner

Instituto de Telecomunicaciones y Aplicaciones Multimedia

Z. Zhao

Politecnico di Torino

IEEE Network

0890-8044 (ISSN), 1558-156X (eISSN)

Vol. In Press

Subject categories (SSIF 2025)

Computer science

DOI

10.1109/MNET.2025.3556212

More information

Last updated

2025-04-23