Combining Relevance and Magnitude for Resource-saving DNN Pruning
Journal article, 2026

Pruning neural networks, i.e., removing some of their parameters whilst retaining their accuracy, is one of the main ways to reduce the latency of a machine learning pipeline, especially in resource- and/or bandwidth-constrained scenarios. In this context, the pruning technique, i.e., how to choose the parameters to remove, is critical to the system performance. In this paper, we propose a novel pruning approach, called FlexRel and predicated upon combining training-time and inference-time information, namely, parameter magnitude and relevance, in order to improve the resulting accuracy whilst saving both computational resources and bandwidth. Our performance evaluation shows that FlexRel is able to achieve higher pruning factors, saving over 35% bandwidth for typical accuracy targets.
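The abstract describes scoring parameters by a combination of magnitude (training-time information) and relevance (inference-time information). The sketch below is purely illustrative and is not the FlexRel algorithm itself; the combination rule, the `alpha` weight, and the `prune_fraction` knob are assumptions chosen to show the general idea of combined-score pruning.

```python
import numpy as np

def combined_pruning_mask(weights, relevance, alpha=0.5, prune_fraction=0.35):
    """Illustrative sketch (not FlexRel itself): score each parameter by a
    convex combination of its magnitude and its relevance, then prune the
    lowest-scoring fraction. `alpha` and `prune_fraction` are hypothetical."""
    # Normalize both signals to [0, 1] so they are comparable.
    mag = np.abs(weights)
    mag = mag / (mag.max() + 1e-12)
    rel = relevance / (relevance.max() + 1e-12)

    score = alpha * mag + (1.0 - alpha) * rel

    # Keep the (1 - prune_fraction) highest-scoring parameters.
    k = int(prune_fraction * score.size)
    threshold = np.partition(score.ravel(), k)[k] if k > 0 else -np.inf
    return score >= threshold

# Example: prune 35% of a random weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
r = rng.random(size=(8, 8))  # stand-in for per-parameter relevance scores
mask = combined_pruning_mask(w, r, alpha=0.5, prune_fraction=0.35)
pruned_w = w * mask
```

Zeroed-out parameters need not be transmitted, which is how a higher pruning factor translates into bandwidth savings in a distributed-learning pipeline.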

Distributed learning

Resource utilization

Machine Learning model compression

Authors

Carla Fabiana Chiasserini

Politecnico di Torino

Chalmers, Computer Science and Engineering, Computer and Network Systems

Consiglio Nazionale delle Ricerche

Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT)

Francesco Malandrino

Consiglio Nazionale delle Ricerche

Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT)

N. Molner

Instituto de Telecomunicaciones y Aplicaciones Multimedia

Z. Zhao

Politecnico di Torino

IEEE Network

0890-8044 (ISSN), 1558-156X (eISSN)

Vol. 40, Issue 1

Subject categories (SSIF 2025)

Computer Science

DOI

10.1109/MNET.2025.3556212

More information

Last updated

2026-01-29