Federated learning systems: Architecture alternatives
Paper in proceeding, 2020

Machine Learning (ML) and Artificial Intelligence (AI) have gained increasing attention in research and industry. Federated Learning, as an approach to distributed learning, shows its potential with the growing number of edge devices and their increasing computing power. However, most current Federated Learning systems apply a single-server centralized architecture, which may cause several critical problems, such as a single point of failure as well as scaling and performance issues. In this paper, we propose and compare four architecture alternatives for a Federated Learning system, i.e., centralized, hierarchical, regional, and decentralized architectures. We conduct the study using two well-known data sets and measure several system performance metrics for all four alternatives. Our results suggest scenarios and use cases that are suitable for each alternative. In addition, we investigate the trade-off between communication latency, model evolution time, and model classification performance, which is crucial when applying the results to real-world industrial systems.
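To make the baseline concrete, below is a minimal sketch of the single-server (centralized) federated averaging loop that the paper contrasts against hierarchical, regional, and decentralized alternatives. All names and values here (local_update, fed_avg, NUM_CLIENTS, the toy linear model) are illustrative assumptions, not the authors' implementation or data sets.

```python
# Minimal sketch of centralized Federated Averaging with NumPy.
# Clients train locally on private data; a single server aggregates the models.
import numpy as np

NUM_CLIENTS = 10   # edge devices holding private data (assumed value)
ROUNDS = 5         # global aggregation rounds (assumed value)
DIM = 20           # size of the toy model parameter vector

rng = np.random.default_rng(0)
client_data = [(rng.normal(size=(50, DIM)), rng.normal(size=50))
               for _ in range(NUM_CLIENTS)]

def local_update(weights, X, y, lr=0.01, epochs=1):
    """One client's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

global_w = np.zeros(DIM)
for rnd in range(ROUNDS):
    # Each round, every client trains locally on the current global model ...
    local_ws = [local_update(global_w, X, y) for X, y in client_data]
    # ... and the central server aggregates the results into a new global model.
    global_w = fed_avg(local_ws, [len(y) for _, y in client_data])
    loss = np.mean([np.mean((X @ global_w - y) ** 2) for X, y in client_data])
    print(f"round {rnd}: mean client loss = {loss:.3f}")
```

In the hierarchical and regional alternatives studied in the paper, the same aggregation step would first run on intermediate (regional) servers before, or instead of, a single global server; this is the architectural choice that drives the reported trade-off between communication latency, model evolution time, and classification performance.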

System Architecture

Federated Learning

Machine Learning

Artificial Intelligence

Author

Hongyi Zhang

Chalmers, Computer Science and Engineering (Chalmers), Software Engineering (Chalmers)

Jan Bosch

Chalmers, Computer Science and Engineering (Chalmers), Software Engineering (Chalmers)

Helena Holmström Olsson

Malmö University

Proceedings - Asia-Pacific Software Engineering Conference, APSEC

1530-1362 (ISSN)

Vol. 2020-December, p. 385-394, Article no. 9359305
978-1-7281-9553-7 (ISBN)

27th Asia-Pacific Software Engineering Conference
Singapore, Singapore

Subject Categories

Other Computer and Information Science

Computer Science

Computer Systems

DOI

10.1109/APSEC51365.2020.00047

More information

Latest update

3/26/2021