Double Averaging and Gradient Projection: Convergence Guarantees for Decentralized Constrained Optimization
Journal article, 2024

We consider a generic decentralized constrained optimization problem over static, directed communication networks, where each agent has exclusive access to only one convex, differentiable local objective term and one convex constraint set. For this setup, we propose a novel decentralized algorithm, called DAGP (Double Averaging and Gradient Projection). We achieve global optimality through a novel distributed tracking technique we call distributed null projection. Further, we show that DAGP can be used to solve unconstrained problems with non-differentiable objective terms via a problem reduction scheme. Assuming only smoothness of the objective terms, we study the convergence of DAGP and establish sub-linear rates of convergence in terms of feasibility, consensus, and optimality, without additional assumptions (e.g., strong convexity). For the analysis, we circumvent the difficulty of selecting Lyapunov functions by proposing a new methodology of convergence analysis, which we refer to as aggregate lower-bounding. To demonstrate the generality of this method, we also provide an alternative convergence proof for the standard gradient descent algorithm with smooth functions.
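The abstract does not reproduce the DAGP update rules, but the problem setup can be illustrated with a minimal decentralized baseline. The sketch below is not the authors' method (the double-averaging and distributed null projection steps are omitted); it only runs a plain consensus-then-projected-gradient iteration for min_x Σ_i f_i(x) subject to x ∈ ∩_i X_i over a directed ring, with hypothetical toy objectives, constraint sets, and mixing matrix.

```python
# Illustrative sketch only: a generic decentralized projected-gradient step for
#   min_x  sum_i f_i(x)   s.t.   x in the intersection of the X_i,
# over a static, directed communication graph. This is NOT the DAGP update;
# all problem data and parameter choices below are hypothetical.
import numpy as np

n_agents, dim = 4, 2
rng = np.random.default_rng(0)

# Toy smooth, convex local objectives f_i(x) = 0.5 * ||x - b_i||^2.
b = rng.normal(size=(n_agents, dim))
grad = lambda i, x: x - b[i]

# Toy local constraint sets X_i = {x : x >= l_i}, projected by clipping.
l = rng.uniform(-1.0, 0.0, size=(n_agents, dim))
proj = lambda i, x: np.maximum(x, l[i])

# Row-stochastic mixing matrix for a directed ring (agent i hears from i-1).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.5

x = np.zeros((n_agents, dim))   # one local iterate per agent
step = 0.1
for _ in range(200):
    mixed = W @ x               # consensus step: mix neighbors' iterates
    for i in range(n_agents):   # local gradient step, then project onto X_i
        x[i] = proj(i, mixed[i] - step * grad(i, mixed[i]))

print("agent iterates:\n", x)   # iterates should be nearly in consensus
```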

convergence analysis

constrained optimization

convex optimization

distributed optimization

Authors

Firooz Shahriari Mehr

Chalmers, Computer Science and Engineering, Data Science and AI

Ashkan Panahi

Chalmers, Computer Science and Engineering, Data Science and AI

IEEE Transactions on Automatic Control

0018-9286 (ISSN) 1558-2523 (eISSN)

Vol. In Press

Subject categories (SSIF 2011)

Computational Mathematics

Control Engineering

DOI

10.1109/TAC.2024.3520513

More information

Last updated

2025-01-10