Convex Optimization for Machine Learning over Graphs: From Collaborative Learning to Node Classification
Doctoral thesis, 2025
Machine learning and optimization are deeply interconnected fields, with optimization forming the backbone of most machine learning methods. This thesis explores two directions in which optimization contributes to machine learning. The first concerns the design and analysis of distributed optimization algorithms for efficient training of machine learning models. The second focuses on the development of a convex optimization framework for node classification that jointly leverages node features and graph structure.
In the first part of this thesis, we introduce DAGP, a decentralized optimization algorithm for peer-to-peer communication networks, enabling private and scalable training at the network edge. To address stragglers, we propose ASY-DAGP, an asynchronous extension that still converges to the optimal solution under mild conditions. In applications such as decentralized federated learning, where data distributions across agents are heterogeneous, we show that our constrained framework naturally supports personalization, allowing for personalized variants of DAGP. Finally, we introduce the Linear Quadratic Performance Estimation Problem (LQ-PEP), a new convergence analysis methodology that systematically derives convergence rates without relying on manually crafted Lyapunov functions.
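The abstract describes DAGP only at a high level. As a rough illustration of the class of methods it belongs to, the sketch below runs a generic decentralized projected-gradient scheme: each agent averages with its neighbors, takes a local gradient step, and projects onto a shared constraint set. This is not the DAGP algorithm itself; the network, the quadratic local losses, the unit-ball constraint, and the step size are all illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def decentralized_pgd(targets, W, steps=2000, lr=0.02):
    """Generic decentralized projected gradient descent (not DAGP).

    Agent i holds the local loss ||x - t_i||^2; all agents share the
    constraint ||x|| <= 1. W is a doubly stochastic mixing matrix
    encoding the communication network.
    """
    m, d = targets.shape
    X = np.zeros((m, d))                 # one local iterate per agent
    for _ in range(steps):
        X = W @ X                        # consensus (averaging) step
        X = X - lr * 2.0 * (X - targets)  # local gradient step
        X = np.array([project_ball(x) for x in X])  # local projection
    return X

# Three fully connected agents; the global optimum is the projection of
# the mean target (1, 1) onto the unit ball, i.e. roughly (0.707, 0.707).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
targets = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
X = decentralized_pgd(targets, W)
```

With a constant step size, the local iterates agree only up to a small consensus error; exact convergence under asynchrony and constraints is precisely what the DAGP and ASY-DAGP analyses in the thesis address.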
In the second part of this thesis, we propose a novel optimization framework for transductive node classification that integrates graph clustering with a regularization term based on node features. To the best of our knowledge, this is the first work to theoretically demonstrate the synergistic effect of combining these two sources of information, i.e., the graph structure and the node features. We prove that perfect label recovery can be achieved under milder conditions than those required when using either source alone.
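The idea of combining graph structure with node features can be illustrated with a generic convex baseline: Laplacian-regularized least squares, where label smoothness is penalized along both the structural edges and a feature-similarity graph. This is a hedged sketch, not the framework of the thesis; the toy graph, the one-dimensional features, the Gaussian kernel, and the weights `alpha`, `beta` are invented for illustration.

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

# Toy graph: two 3-node communities with one noisy inter-community edge (2-3).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

# Scalar node features roughly aligned with the communities.
x = np.array([0.1, 0.0, 0.2, 0.9, 1.0, 0.8])
# Feature-similarity weights via a Gaussian kernel (bandwidth is arbitrary).
Wx = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
np.fill_diagonal(Wx, 0.0)

Lg, Lx = laplacian(A), laplacian(Wx)

# One labeled node per class: node 0 -> -1, node 5 -> +1.
mask = np.diag([1.0, 0.0, 0.0, 0.0, 0.0, 1.0])
y = np.array([-1.0, 0.0, 0.0, 0.0, 0.0, 1.0])

# Minimize ||M(f - y)||^2 + alpha f'Lg f + beta f'Lx f: a convex quadratic
# whose minimizer solves one linear system.
alpha, beta = 1.0, 1.0
f = np.linalg.solve(mask + alpha * Lg + beta * Lx, mask @ y)
pred = np.sign(f)
```

In this toy example the feature term pulls node 2 back toward its own community despite the noisy structural edge to node 3, which is the qualitative synergy the thesis quantifies with exact recovery guarantees.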
Decentralized federated learning
Convex optimization
Constrained optimization
Collaborative learning
Node classification
Distributed optimization
Convergence analysis
Convex clustering
Author
Firooz Shahriari Mehr
Data Science and AI
Double Averaging and Gradient Projection: Convergence Guarantees for Decentralized Constrained Optimization
IEEE Transactions on Automatic Control, Vol. 70 (2025), pp. 3433-3440
Journal article
Asynchronous Decentralized Optimization with Constraints: Achievable Speeds of Convergence for Directed Graphs
Proceedings of Machine Learning Research, Vol. 258 (2025), pp. 2575-2583
Paper in proceeding
Decentralized Constrained Optimization: Double Averaging and Gradient Projection
Proceedings of the IEEE Conference on Decision and Control, Vol. 2021-December (2021), pp. 2400-2406
Paper in proceeding
Firooz Shahriari-Mehr, Javad Aliakbari, Alexandre Graell i Amat, and Ashkan Panahi, Bounds on Perfect Node Classification: A Convex Graph Clustering Perspective
The first part of the thesis develops new methods that allow many devices to train a shared model together, even when communication is limited or unreliable. The second part presents a new approach for predicting missing labels in a graph by jointly using both the connections in the graph and the information attached to each data point. The results show that combining these two sources of information leads to more reliable predictions and improved performance under milder conditions than previously known.
Efficient Data Representation and Machine Learning over Next Generation Networks
Wallenberg AI, Autonomous Systems and Software Program, 2021-01-01 --
Areas of Advance
Information and Communication Technology
Subject Categories (SSIF 2025)
Computer Sciences
Computer Systems
Infrastructure
C3SE (Chalmers Centre for Computational Science and Engineering, until 2020)
DOI
10.63959/chalmers.dt/5789
ISBN
978-91-8103-332-8
Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5789
Publisher
Chalmers
EDIT Building, EE room
Opponent: Professor Stefan Werner, Norwegian University of Science and Technology, Norway