Dependable Distributed Training of Compressed Machine Learning Models
Paper in proceedings, 2024

The existing work on the distributed training of machine learning (ML) models has consistently overlooked the distribution of the achieved learning quality, focusing instead on its average value. This leads to poor dependability of the resulting ML models, whose performance may be much worse than expected. We fill this gap by proposing DepL, a framework for dependable learning orchestration, able to make high-quality, efficient decisions on (i) the data to leverage for learning, (ii) the models to use and when to switch among them, and (iii) the clusters of nodes, and the resources thereof, to exploit. For concreteness, we consider as possible available models a full DNN and its compressed versions. Unlike previous studies, DepL guarantees that a target learning quality is reached with a target probability, while keeping the training cost at a minimum. We prove that DepL has a constant competitive ratio and polynomial complexity, and show that it outperforms the state of the art by over 27% while closely matching the optimum.
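To make the dependability guarantee in the abstract concrete, the following is a minimal, hypothetical sketch, not the paper's algorithm: among candidate configurations (model variant paired with a node cluster), it picks the cheapest one whose estimated probability of reaching a target learning quality meets a target confidence. All names, costs, and quality distributions below are invented for illustration; DepL derives its guarantees analytically rather than by Monte Carlo estimation.

```python
import random

random.seed(0)

# Hypothetical candidates: (name, training cost, sampler of achieved quality).
# The quality distributions are made up for this sketch.
candidates = [
    ("full DNN / large cluster",   10.0, lambda: random.gauss(0.92, 0.02)),
    ("pruned DNN / large cluster",  6.0, lambda: random.gauss(0.88, 0.04)),
    ("pruned DNN / small cluster",  3.0, lambda: random.gauss(0.85, 0.06)),
]

TARGET_QUALITY = 0.87   # required learning quality (e.g., accuracy)
TARGET_PROB = 0.9       # required probability of reaching it
N_SAMPLES = 10_000      # Monte Carlo samples per candidate

def success_prob(sample_quality, n=N_SAMPLES):
    """Empirical estimate of P(quality >= TARGET_QUALITY)."""
    hits = sum(sample_quality() >= TARGET_QUALITY for _ in range(n))
    return hits / n

# Keep only configurations meeting the dependability target,
# then choose the cheapest among them.
feasible = [
    (cost, name)
    for name, cost, sampler in candidates
    if success_prob(sampler) >= TARGET_PROB
]
if feasible:
    cost, name = min(feasible)
    print(f"selected: {name} at cost {cost}")
else:
    print("no configuration meets the dependability target")
```

The point of the sketch is the constraint structure: minimizing cost subject to a probabilistic quality target, rather than a target on the average quality alone.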

Keywords

dependable learning

network support to machine learning

distributed learning

learning guarantees

Authors

Francesco Malandrino

Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT)

Consiglio Nazionale delle Ricerche (CNR)

Giuseppe Di Giacomo

Polytechnic University of Turin

Marco Levorato

University of California at Irvine (UCI)

Carla Fabiana Chiasserini

Network and Systems

Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT)

Polytechnic University of Turin

Consiglio Nazionale delle Ricerche (CNR)

Proceedings - 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2024

Pages 147-156
ISBN: 9798350394665

25th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2024
Perth, Australia

Subject Categories

Computer Science

Computer Systems

DOI

10.1109/WoWMoM60985.2024.00036
