Robust Federated Learning against Low-quality and Corrupted Data
Federated learning (FL) holds great potential for privacy-preserving machine learning (ML), but its accuracy degrades significantly when clients hold low-quality and/or corrupted data. For highly skewed non-IID data, neural-network accuracy drops by up to ~55%. Worse still, malicious clients may intentionally generate corrupted data to attack the training process; a successful attack poisons the model and can drive learning accuracy down to ~0%. The main objective of this project is to improve the robustness of FL against low-quality and corrupted data. Specifically, we will focus on cognitive client selection strategies that ensure high-quality, trusted data are fully utilized.
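As a minimal sketch of the client-selection idea, the server could score each client's contribution (for instance, by evaluating its update on a small trusted validation set) and aggregate only the top-scoring clients. The function names, the score dictionary, and the use of simple averaging below are illustrative assumptions, not the project's actual method:

```python
def select_clients(scores, k):
    """Select the k clients with the highest quality/trust scores.

    scores: dict mapping client id -> quality score (higher is better);
    how the scores are obtained is left open (e.g. validation loss).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

def federated_average(updates, selected):
    """Average model updates from the selected clients only,
    so low-quality or suspected-malicious updates are excluded."""
    chosen = [updates[c] for c in selected]
    dim = len(chosen[0])
    return [sum(u[i] for u in chosen) / len(chosen) for i in range(dim)]

# Hypothetical example: client c2 holds corrupted data and gets a low score.
scores = {"c1": 0.9, "c2": 0.1, "c3": 0.8}
updates = {"c1": [1.0, 2.0], "c2": [100.0, 100.0], "c3": [3.0, 4.0]}
selected = select_clients(scores, k=2)          # -> ["c1", "c3"]
global_update = federated_average(updates, selected)  # -> [2.0, 3.0]
```

Excluding the poorly scored client keeps its outlier update out of the aggregated model, which is the intuition behind robust client selection.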
Jun Li (contact)
Postdoc at Chalmers, Electrical Engineering, Communication and Antenna Systems, Optical Networks
Professor at Chalmers, Electrical Engineering, Communication and Antenna Systems, Optical Networks
Funding: Chalmers participation during 2020–2021