The Impact of Synchronization in Parallel Stochastic Gradient Descent
Paper in proceedings, 2022

In this paper, we discuss our own and related work in the domain of efficient parallel optimization using Stochastic Gradient Descent, aiming at fast and stable convergence in prominent machine learning applications. We outline the results in the context of the aspects and challenges of synchronization, consistency, staleness, and parallel-aware adaptiveness, focusing on their impact on overall convergence.
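
As a rough illustration of the lock-free, asynchronous setting the paper studies, the following Python sketch (not code from the paper; the toy least-squares problem, variable names, and step size are illustrative assumptions) runs Hogwild-style SGD with several threads updating a shared parameter vector without locks, so each gradient is computed from a possibly stale snapshot of the parameters.

# Minimal, illustrative sketch: lock-free asynchronous SGD on a toy
# least-squares problem. Worker threads update a shared parameter vector
# without synchronization, so gradients may be computed from stale values.
import threading

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.01 * rng.standard_normal(1000)

w = np.zeros(10)   # shared parameters, updated lock-free by all workers
STEP = 0.01


def worker(n_steps: int, seed: int) -> None:
    local_rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        i = local_rng.integers(len(X))          # sample one example
        w_stale = w.copy()                      # possibly stale read of shared state
        grad = (X[i] @ w_stale - y[i]) * X[i]   # single-example gradient
        w[:] = w - STEP * grad                  # unsynchronized write-back


threads = [threading.Thread(target=worker, args=(2000, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to true parameters:", np.linalg.norm(w - w_true))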

Keywords

Stochastic gradient descent

Machine Learning

Lock-free

Authors

Karl Bäckström

Network and Systems

Marina Papatriantafilou

Network and Systems

Philippas Tsigas

Network and Systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN), 1611-3349 (eISSN)

Vol. 13145 LNCS, pp. 60-75
978-3-030-94875-7 (ISBN)

18th International Conference on Distributed Computing and Intelligent Technology, ICDCIT 2022
Bhubaneswar, India

WASP SAS: Structuring data for continuous processing and ML systems

Wallenberg AI, Autonomous Systems and Software Program, 2018-01-01 -- 2023-01-01.

Areas of Advance

Information and Communication Technology

Subject Categories

Computational Mathematics

Computer Science

Computer Systems

DOI

10.1007/978-3-030-94876-4_4

More information

Latest update

11/25/2022