Coding for Straggler Mitigation in Federated Learning
Paper in proceedings, 2022

We present a novel coded federated learning (FL) scheme for linear regression that mitigates the effect of straggling devices while retaining the privacy level of conventional FL. The proposed scheme combines one-time padding to preserve privacy with gradient codes to yield resiliency against stragglers, and consists of two phases. In the first phase, the devices share a one-time padded version of their local data with a subset of other devices. In the second phase, the devices and the central server collaboratively and iteratively train a global linear model using gradient codes on the one-time padded local data. To apply one-time padding to real data, our scheme exploits a fixed-point arithmetic representation of the data. Unlike the coded FL scheme recently introduced by Prakash et al., the proposed scheme maintains the same level of privacy as conventional FL while achieving a similar training time. Compared to conventional FL, we show that the proposed scheme achieves a training speed-up factor of 6.6 and 9.2 on the MNIST and Fashion-MNIST datasets for an accuracy of 95% and 85%, respectively.
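As an illustration of the first mechanism named in the abstract, the sketch below shows how one-time padding can be applied to real-valued data via a fixed-point representation. This is a minimal example under assumed parameters; the ring size, the number of fractional bits, and all function names are illustrative choices, not taken from the paper.

```python
import numpy as np

FRAC_BITS = 16        # fractional bits of the fixed-point format (assumed)
MODULUS = 2 ** 32     # size of the integer ring the pad lives in (assumed)

def to_fixed_point(x: np.ndarray) -> np.ndarray:
    """Quantize real data to elements of the ring Z_MODULUS."""
    return np.round(x * (1 << FRAC_BITS)).astype(np.int64) % MODULUS

def from_fixed_point(x_fp: np.ndarray) -> np.ndarray:
    """Map ring elements back to signed reals (two's-complement style)."""
    signed = np.where(x_fp >= MODULUS // 2, x_fp - MODULUS, x_fp)
    return signed / (1 << FRAC_BITS)

def one_time_pad(x_fp: np.ndarray, rng: np.random.Generator):
    """Add a uniformly random key modulo MODULUS; returns (padded, key)."""
    key = rng.integers(0, MODULUS, size=x_fp.shape, dtype=np.int64)
    return (x_fp + key) % MODULUS, key

# Phase one: a device pads its local data before sharing it, so the
# shared copy is statistically independent of the data itself.
rng = np.random.default_rng(seed=0)
local_data = np.array([0.25, -1.5, 3.125])
padded, key = one_time_pad(to_fixed_point(local_data), rng)

# Only a party holding the key can remove the pad exactly.
recovered = from_fixed_point((padded - key) % MODULUS)
assert np.allclose(recovered, local_data)
```

The quantization step is what makes the pad applicable to real data: perfect secrecy of a one-time pad requires a key drawn uniformly from a finite group, which real-valued data does not admit, hence the fixed-point representation noted in the abstract.

In the second phase, gradient codes let the server recover the full gradient from any sufficiently large subset of non-straggling devices. The snippet below is the classic three-device, one-straggler construction in the style of Tandon et al.'s gradient coding, shown only to illustrate the decoding idea, not the exact code used in the paper.

```python
import numpy as np

# Each device stores two of the three data partitions and sends a fixed
# linear combination of their gradients (encoding matrix B, row per device).
B = np.array([[0.5, 1.0,  0.0],   # device 1 sends g1/2 + g2
              [0.0, 1.0, -1.0],   # device 2 sends g2 - g3
              [0.5, 0.0,  1.0]])  # device 3 sends g1/2 + g3

g = np.array([[1.0], [2.0], [3.0]])  # toy per-partition gradients
coded = B @ g                        # what each device would send

# If device 2 straggles, the server decodes from devices 1 and 3:
# a^T * B[[0, 2], :] = [1, 1, 1], so a = [1, 1] recovers the full sum.
a = np.array([1.0, 1.0])
full_gradient = a @ coded[[0, 2]]
assert np.isclose(full_gradient.item(), g.sum())
```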

Authors

Siddhartha Kumar

Simula UiB

Reent Schlegel

Simula UiB

Eirik Rosnes

Simula UiB

Alexandre Graell i Amat

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Simula UiB

IEEE International Conference on Communications

1550-3607 (ISSN)

Vol. 2022-May, pp. 4962-4967
978-1-5386-8347-7 (ISBN)

2022 IEEE International Conference on Communications (ICC 2022), Seoul, South Korea

Reliable and Secure Coded Edge Computing

Swedish Research Council (VR) (2020-03687), 2021-01-01 -- 2024-12-31.

Subject Categories

Computer Engineering

Computational Mathematics

Geophysics

DOI

10.1109/ICC45855.2022.9838986

Latest update

10/25/2023