Convex Optimization for Machine Learning over Graphs: From Collaborative Learning to Node Classification
Doctoral thesis, 2025


Machine learning and optimization are deeply interconnected fields, with optimization forming the backbone of most machine learning methods. This thesis explores two directions in which optimization contributes to machine learning. The first concerns the design and analysis of distributed optimization algorithms for efficient training of machine learning models. The second focuses on the development of a convex optimization framework for node classification that jointly leverages node features and graph structure.

In the first part of this thesis, we introduce DAGP, a decentralized optimization algorithm for peer-to-peer communication networks that enables private and scalable training at the network edge. To address stragglers, we propose ASY-DAGP, an asynchronous extension that still converges to the optimal solution under mild conditions. In applications such as decentralized federated learning, where data distributions across agents are heterogeneous, we show that our constrained framework naturally supports personalization, allowing for personalized variants of DAGP. Finally, we introduce the Linear Quadratic Performance Estimation Problem (LQ-PEP), a new convergence-analysis methodology that systematically derives convergence rates without relying on manually crafted Lyapunov functions.
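As a rough illustration of the setting this part addresses, decentralized constrained optimization problems of this kind are commonly written in the following form (the notation below is a generic sketch, not taken from the thesis):

```latex
% Illustrative formulation (assumed notation): N agents, each holding a
% private convex loss f_i and a private convex constraint set S_i,
% cooperate over a directed communication graph to solve
\begin{equation*}
  \min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{N} f_i(x)
  \quad \text{subject to} \quad x \in \bigcap_{i=1}^{N} S_i ,
\end{equation*}
% where each agent i accesses only \nabla f_i and the projection onto S_i,
% and exchanges iterates with its neighbors in the graph.
```

Under this template, no single agent sees the full objective or the full constraint set, which is what makes decentralized algorithms with convergence guarantees nontrivial.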

In the second part of this thesis, we propose a novel optimization framework for transductive node classification that integrates graph clustering with a regularization term based on node features. To the best of our knowledge, this is the first work to theoretically demonstrate the synergistic effect of combining these two sources of information, i.e., the graph structure and the node features. We prove that perfect label recovery can be achieved under milder conditions than when using either source alone.
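One plausible way to picture combining the two information sources is a composite convex objective with a graph term and a feature term; the form below is a hedged sketch under assumed notation, not necessarily the exact formulation used in the thesis:

```latex
% Illustrative composite objective (assumed form): u_i is the embedding of
% node i, E and w_{ij} come from the graph, x_i are the node features, and
% g(\cdot) is some fixed feature map; \lambda trades off the two terms.
\begin{equation*}
  \min_{u_1,\dots,u_n} \;
  \underbrace{\sum_{(i,j) \in E} w_{ij}\, \lVert u_i - u_j \rVert_2}_{\text{graph structure}}
  \; + \; \lambda
  \underbrace{\sum_{i=1}^{n} \lVert u_i - g(x_i) \rVert_2^2}_{\text{node features}} .
\end{equation*}
```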

Decentralized federated learning

Convex optimization

Constrained optimization

Collaborative learning

Node classification

Distributed optimization

Convergence analysis

Convex clustering

EDIT Building, EE room
Opponent: Professor Stefan Werner, Norwegian University of Science and Technology, Norway

Author

Firooz Shahriari Mehr

Data Science and AI

Double Averaging and Gradient Projection: Convergence Guarantees for Decentralized Constrained Optimization

IEEE Transactions on Automatic Control, Vol. 70 (2025), pp. 3433-3440

Journal article

Asynchronous Decentralized Optimization with Constraints: Achievable Speeds of Convergence for Directed Graphs

Proceedings of Machine Learning Research, Vol. 258 (2025), pp. 2575-2583

Conference paper

Decentralized Constrained Optimization: Double Averaging and Gradient Projection

Proceedings of the IEEE Conference on Decision and Control, Vol. 2021-December (2021), pp. 2400-2406

Conference paper

Firooz Shahriari-Mehr, Javad Aliakbari, Alexandre Graell i Amat, and Ashkan Panahi, Bounds on Perfect Node Classification: A Convex Graph Clustering Perspective

Machine learning has become a central part of modern technology, and at the heart of every machine learning method lies an optimization problem. This thesis investigates how convex optimization can improve learning in settings where data is naturally connected through graphs. In this work, graphs appear in two different ways: first, when many devices each store their own data and are linked through a communication network represented by a graph (the collaborative learning problem), and second, when the relationships between data points themselves form a graph (the node classification problem). 

The first part of the thesis develops new methods that allow many devices to train a shared model together, even when communication is limited or unreliable. The second part presents a new approach for predicting missing labels in a graph by jointly using both the connections in the graph and the information attached to each data point. The results show that combining these two sources of information leads to more reliable predictions and improved performance under milder conditions than previously known.

Efficient data representation and machine learning over next-generation networks

Wallenberg AI, Autonomous Systems and Software Program, 2021-01-01 -- ongoing

Areas of Advance

Information and Communication Technology

Subject categories (SSIF 2025)

Computer Science

Computer Systems

Infrastructure

C3SE (-2020, Chalmers Centre for Computational Science and Engineering)

DOI

10.63959/chalmers.dt/5789

ISBN

978-91-8103-332-8

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5789

Publisher

Chalmers


Online


More information

Last updated

2025-11-28