AF-DNDF: Asynchronous Federated Learning of Deep Neural Decision Forests
Paper in proceedings, 2021

In recent years, with more edge devices being put into use, the amount of data that is created, transmitted and stored is increasing exponentially. Moreover, due to the development of machine learning algorithms, modern software-intensive systems are able to take advantage of the data to further improve their service quality. However, it is expensive and inefficient to transmit large amounts of data to a central location for the purpose of training and deploying machine learning models. Data transfer from edge devices across the globe to central locations may also raise privacy concerns and conflict with local data regulations. As a distributed learning approach, Federated Learning has been introduced to tackle these challenges. Since Federated Learning exchanges locally trained machine learning models rather than the entire data set throughout the training process, the method not only protects user data privacy but also improves model training efficiency. In this paper, we investigate an advanced machine learning algorithm, Deep Neural Decision Forests (DNDF), which unites classification trees with the representation learning functionality of deep convolutional neural networks. We propose a novel algorithm, AF-DNDF, which extends DNDF with an asynchronous federated aggregation protocol. Based on the local quality of each classification tree, our architecture can select and combine the optimal groups of decision trees from multiple local devices. The introduction of the asynchronous protocol enables the algorithm to be deployed in industrial contexts with heterogeneous hardware settings. Our AF-DNDF architecture is validated in an automotive industrial use case focusing on road object recognition and demonstrated by an empirical experiment with two different data sets.
The experimental results show that our AF-DNDF algorithm significantly reduces communication overhead and accelerates model training without sacrificing classification performance. The algorithm reaches the same classification accuracy as commonly used centralized machine learning methods while also greatly improving the quality of local edge models.
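The aggregation idea described above (asynchronously receiving locally trained trees and keeping the best-scoring ones as the global forest) can be sketched as follows. This is an illustrative toy model, not the authors' exact protocol: the `Tree` record, the single `quality` score, and the fixed-capacity server are all assumptions made for the sketch.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Tree:
    # Ordering is by quality only; a hypothetical local validation score
    # stands in for the per-tree quality metric mentioned in the abstract.
    quality: float
    client_id: str = field(compare=False)
    params: dict = field(compare=False, default_factory=dict)

class AsyncForestServer:
    """Toy aggregation server: edge clients push locally trained trees at
    any time (no synchronization barrier); the server keeps the K
    best-scoring trees as the current global forest."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.forest: list[Tree] = []  # min-heap keyed on tree quality

    def submit(self, tree: Tree) -> None:
        # Asynchronous update: each push is handled independently as it
        # arrives, so slow clients never block fast ones.
        if len(self.forest) < self.capacity:
            heapq.heappush(self.forest, tree)
        elif tree.quality > self.forest[0].quality:
            # Evict the currently weakest tree in the global forest.
            heapq.heapreplace(self.forest, tree)

    def global_forest(self) -> list[Tree]:
        # Best-first view of the aggregated forest.
        return sorted(self.forest, reverse=True)
```

In this sketch, only the top-K trees by local quality survive aggregation, regardless of which client produced them, which mirrors the abstract's claim of selecting optimal groups of decision trees from multiple local devices.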

Keywords

Machine Learning

Federated Learning

Deep Neural Decision Forests

Authors

Hongyi Zhang

Testing, Requirements, Innovation and Psychology

Jan Bosch

Testing, Requirements, Innovation and Psychology

Helena Holmström Olsson

Malmö University

Ashok Chaitanya Koppisetty

Volvo

Proceedings - 2021 47th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2021

308-315
9781665427050 (ISBN)

47th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2021
Palermo, Italy

Subject Categories

Other Computer and Information Science

Media Engineering

Computer Science

DOI

10.1109/SEAA53835.2021.00047

More information

Latest update

1/3/2024