Distributed Model Training Based on Data Parallelism in Edge Computing-Enabled Elastic Optical Networks
Journal article, 2021

The emergence of edge computing provides an effective solution for executing distributed model training (DMT). The deployment of training data among edge nodes affects both training efficiency and network resource usage. This letter targets the efficient provisioning of DMT services by optimizing the partition and distribution of training data in edge computing-enabled optical networks. An integer linear programming (ILP) model and a data parallelism deployment algorithm (DPDA) are proposed to solve this problem. The performance of the proposed approaches is evaluated through simulation. Simulation results show that the proposed algorithm can deploy more DMT services than the benchmark.
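The letter's ILP model and DPDA heuristic are not reproduced in this record. As a hypothetical illustration of the data-parallelism pattern the abstract refers to, the sketch below partitions a training set into shards across simulated edge nodes, computes a gradient per shard, and averages the gradients for a synchronous update. All names (partition_data, local_gradient, train_data_parallel) and the linear-regression task are assumptions for illustration, not the letter's method.

```python
import numpy as np

def partition_data(X, y, num_nodes):
    """Split the training set into near-equal shards, one per edge node.
    (Illustrative even split; the letter optimizes the partition instead.)"""
    idx = np.array_split(np.arange(len(X)), num_nodes)
    return [(X[i], y[i]) for i in idx]

def local_gradient(w, X_shard, y_shard):
    """Gradient of mean-squared error for a linear model on one shard."""
    err = X_shard @ w - y_shard
    return X_shard.T @ err / len(X_shard)

def train_data_parallel(X, y, num_nodes=4, lr=0.1, steps=100):
    """Synchronous data-parallel training: each node computes a gradient
    on its own shard; the gradients are averaged and applied centrally."""
    shards = partition_data(X, y, num_nodes)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]  # per-node work
        w -= lr * np.mean(grads, axis=0)                          # aggregation step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + 0.01 * rng.normal(size=400)
    print(train_data_parallel(X, y))  # should approach true_w
```

In the letter's setting the shard sizes and their placement on edge nodes are decision variables chosen by the ILP or DPDA, and the gradient exchange traverses the elastic optical network, which is what couples training-data deployment to spectrum usage.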

Keywords

edge computing, parallel processing, optical networks, distributed model training, training, task analysis, data models, data parallelism, training data, computational modeling, optical fiber networks

Authors

Yajie Li

Beijing University of Posts and Telecommunications (BUPT)

Zebin Zeng

Beijing University of Posts and Telecommunications (BUPT)

Jun Li

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Boyuan Yan

Beijing University of Posts and Telecommunications (BUPT)

Yongli Zhao

Beijing University of Posts and Telecommunications (BUPT)

Jie Zhang

Beijing University of Posts and Telecommunications (BUPT)

IEEE Communications Letters

1089-7798 (ISSN), 1558-2558 (eISSN)

Vol. 25, No. 4, pp. 1241-1244, Article no. 9274363

Subject categories

Computer Engineering

Communication Systems

Computer Science

DOI

10.1109/LCOMM.2020.3041453

More information

Last updated

2021-05-19