The Impact of Synchronization in Parallel Stochastic Gradient Descent
Paper in proceedings, 2022

In this paper, we discuss our work and related work in the domain of efficient parallel optimization using Stochastic Gradient Descent (SGD) for fast and stable convergence in prominent machine learning applications. We outline the results in the context of the challenges of synchronization, consistency, staleness, and parallelism-aware adaptiveness, focusing on their impact on overall convergence.

Stochastic gradient descent

Machine Learning

Lock-free
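The "Lock-free" keyword refers to asynchronous parallel SGD in which worker threads update a shared parameter vector without mutual exclusion. A minimal illustrative sketch of this pattern (in the style of Hogwild!-type algorithms) on a least-squares objective is shown below; all names, step sizes, and problem sizes are illustrative assumptions, not the paper's implementation.

```python
import threading
import numpy as np

# Illustrative sketch of lock-free (Hogwild!-style) parallel SGD on a
# synthetic least-squares problem. This is NOT the authors' code; all
# parameters here are assumptions chosen for demonstration.

n, d = 1000, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

w = np.zeros(d)   # shared parameter vector, updated with no locks
lr = 0.01         # step size

def worker(seed, steps=2000):
    local = np.random.default_rng(seed)   # per-thread RNG
    for _ in range(steps):
        i = local.integers(n)                  # sample one example
        grad = (X[i] @ w - y[i]) * X[i]        # stochastic gradient
        w[:] = w - lr * grad                   # racy, unsynchronized write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

loss = np.mean((X @ w - y) ** 2)
print(f"final mean squared error: {loss:.4f}")
```

Because the writes to `w` are unsynchronized, updates from different threads may interleave and overwrite each other; the staleness and consistency effects of exactly such races on convergence are what the paper's survey addresses.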

Authors

Karl Bäckström

Networks and Systems

Marina Papatriantafilou

Networks and Systems

Philippas Tsigas

Networks and Systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN) 1611-3349 (eISSN)

Vol. 13145 LNCS, pp. 60-75
978-3-030-94875-7 (ISBN)

18th International Conference on Distributed Computing and Intelligent Technology, ICDCIT 2022
Bhubaneswar, India

Subject categories

Computational mathematics

Computer science

Computer systems

DOI

10.1007/978-3-030-94876-4_4

More information

Last updated

2022-02-24