Distributed Model Training based on Data Parallelism in Edge Computing-enabled Elastic Optical Networks
Journal article, 2020

The emergence of edge computing provides an effective solution for executing distributed model training (DMT). The deployment of training data among edge nodes affects both training efficiency and network resource usage. This letter aims at the efficient provisioning of DMT services by optimizing the partitioning and distribution of training data in edge computing-enabled optical networks. An integer linear programming (ILP) model and a data parallelism deployment algorithm (DPDA) are proposed to solve this problem. The performance of the proposed approaches is evaluated through simulation. Simulation results show that the proposed algorithm can deploy more DMT services than the benchmark.
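The core idea of data parallelism summarized in the abstract, partitioning a training dataset into shards so that each edge node trains on its own portion, can be illustrated with a minimal sketch. This is a generic illustration only, not the paper's DPDA algorithm; the function name and round-robin scheme are assumptions for the example, and the ILP-based deployment logic is not shown.

```python
# Generic data-parallelism illustration (not the paper's DPDA algorithm):
# split a training dataset into near-equal shards, one per edge node.

def partition_dataset(samples, num_nodes):
    """Distribute `samples` round-robin into `num_nodes` near-equal shards."""
    shards = [[] for _ in range(num_nodes)]
    for i, sample in enumerate(samples):
        shards[i % num_nodes].append(sample)
    return shards

if __name__ == "__main__":
    data = list(range(10))            # stand-in for training samples
    for node_id, shard in enumerate(partition_dataset(data, 3)):
        print(f"edge node {node_id}: {shard}")
```

In an actual DMT deployment, the shard sizes and node choices would additionally depend on per-node compute capacity and the optical network paths available, which is exactly the optimization the letter addresses.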

Keywords

distributed model training, edge computing, data parallelism, optical networks, Training, Parallel processing, Task analysis, Training data, Data models, Computational modeling, Optical fiber networks

Authors

Yajie Li

Beijing University of Posts and Telecommunications (BUPT)

Zebin Zeng

Beijing University of Posts and Telecommunications (BUPT)

Jun Li

Chalmers, Electrical Engineering, Communication and Antenna Systems, Optical Networks

Boyuan Yan

Beijing University of Posts and Telecommunications (BUPT)

Yongli Zhao

Beijing University of Posts and Telecommunications (BUPT)

Jie Zhang

Beijing University of Posts and Telecommunications (BUPT)

IEEE Communications Letters

1089-7798 (ISSN)

Vol. In Press

Subject categories

Computer Engineering

Communication Systems

Computer Science

DOI

10.1109/LCOMM.2020.3041453

More information

Last updated

2020-12-21