DSAG: A Mixed Synchronous-Asynchronous Iterative Method for Straggler-Resilient Learning
Journal article, 2023

We consider straggler-resilient learning. In many previous works, e.g., in the coded computing literature, straggling is modeled as random delays that are independent and identically distributed between workers. In many practical scenarios, however, a given worker may straggle over an extended period of time. We propose a latency model that captures this behavior and is substantiated by traces collected on Microsoft Azure, Amazon Web Services (AWS), and a small local cluster. Building on this model, we propose DSAG, a mixed synchronous-asynchronous iterative optimization method, based on the stochastic average gradient (SAG) method, that combines timely and stale results. We also propose a dynamic load-balancing strategy to further reduce the impact of straggling workers. We evaluate DSAG for principal component analysis of a large genomics dataset, cast as a finite-sum optimization problem, and for logistic regression, on a cluster of 100 workers on AWS. For the particular scenario we consider, DSAG is up to about 50% faster than SAG and more than twice as fast as coded computing methods.
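The underlying SAG method keeps a table of the most recently computed gradient for each component function and steps along the average of the stored gradients, so a stale entry can still contribute to the update. A minimal illustrative sketch on toy 1-D least-squares data (the data, step size, and iteration count are my assumptions, not from the paper):

```python
import random

# Illustrative SAG sketch on f(x) = (1/n) * sum_i 0.5*(a[i]*x - b[i])**2;
# data and hyperparameters are made up for this example.
def sag(a, b, step=0.005, iters=30000, seed=0):
    n = len(a)
    rng = random.Random(seed)
    x = 0.0
    grads = [0.0] * n   # stored (possibly stale) gradient per component
    grad_sum = 0.0      # running sum of the stored gradients
    for _ in range(iters):
        i = rng.randrange(n)
        g = a[i] * (a[i] * x - b[i])  # fresh gradient of component i
        grad_sum += g - grads[i]      # swap the stale entry out of the sum
        grads[i] = g
        x -= step * grad_sum / n      # step along the average stored gradient
    return x

a = [1.0, 2.0, 3.0]
b = [2.0, 3.0, 4.0]
x = sag(a, b)
# closed-form least-squares solution for comparison
x_star = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)
```

Because each step averages the stored gradient table rather than requiring a fresh gradient from every component, stale results remain useful, which is the property DSAG exploits to combine timely and stale worker results.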

Keywords: stochastic average gradient (SAG), coded computing, variance reduction, straggler mitigation, principal component analysis (PCA), iterative optimization


Albin Severinson, Universitetet i Bergen; Simula UiB

Eirik Rosnes, Simula UiB

Salim El Rouayheb, Department of Electrical and Computer Engineering

Alexandre Graell i Amat, Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks; Simula UiB

IEEE Transactions on Communications

0090-6778 (ISSN), 1558-0857 (eISSN)

Vol. 71, no. 2, pp. 808–822

Reliable and Secure Coded Edge Computing

Swedish Research Council (VR) (2020-03687), 2021-01-01 -- 2024-12-31.


Other Computer and Information Science


Probability Theory and Statistics


