Federated learning systems: Architecture alternatives
Paper in proceedings, 2020

Machine Learning (ML) and Artificial Intelligence (AI) have increasingly gained attention in research and industry. Federated Learning, as an approach to distributed learning, shows its potential with the increasing number of devices on the edge and the growth of their available computing power. However, most current Federated Learning systems apply a single-server centralized architecture, which may cause several critical problems, such as a single point of failure as well as scaling and performance problems. In this paper, we propose and compare four architecture alternatives for a Federated Learning system, i.e., centralized, hierarchical, regional, and decentralized architectures. We conduct the study using two well-known data sets and measuring several system performance metrics for all four alternatives. Our results suggest scenarios and use cases that are suitable for each alternative. In addition, we investigate the trade-off between communication latency, model evolution time, and model classification performance, which is crucial to applying the results to real-world industrial systems.
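To make the centralized baseline concrete, the sketch below shows one way a single aggregation server could combine client updates via federated averaging. This is a minimal illustration under stated assumptions, not the paper's implementation: the logistic-regression model, the synthetic client data, and the number of communication rounds are all hypothetical.

```python
# Illustrative sketch (not from the paper): one communication round of
# federated averaging under a centralized (single-server) architecture.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of logistic-regression gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # mean gradient over local samples
        w -= lr * grad
    return w

def federated_averaging(global_w, clients):
    """Central server aggregates client models, weighted by local data size."""
    total = sum(len(y) for _, y in clients)
    updates = [local_update(global_w, X, y) * (len(y) / total) for X, y in clients]
    return np.sum(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 10
    # Three hypothetical edge clients, each holding its own data.
    clients = [(rng.normal(size=(50, dim)), rng.integers(0, 2, 50).astype(float))
               for _ in range(3)]
    w = np.zeros(dim)
    for _ in range(20):                        # communication rounds
        w = federated_averaging(w, clients)
```

The hierarchical, regional, and decentralized alternatives compared in the paper would distribute this aggregation step rather than relying on a single server.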

System Architecture

Federated Learning

Machine Learning

Artificial Intelligence

Authors

Hongyi Zhang

Chalmers, Computer Science and Engineering, Software Engineering

Jan Bosch

Chalmers, Computer Science and Engineering, Software Engineering

Helena Holmström Olsson

Malmö University

Proceedings - Asia-Pacific Software Engineering Conference, APSEC

1530-1362 (ISSN)

Vol. 2020-December, pp. 385-394, Article no. 9359305
978-1-7281-9553-7 (ISBN)

27th Asia-Pacific Software Engineering Conference
Singapore, Singapore

Subject categories

Other Computer and Information Science

Computer Science

Computer Systems

DOI

10.1109/APSEC51365.2020.00047

More information

Last updated

2021-03-26