Distributed Algorithms for Solving Linear Algebraic Equations & Convex Optimization Problems

Authors
Jahvani, Mohammad
Keywords
Distributed Convex Optimization, Linear Algebraic Equations, Consensus, Least-Squares Solution, Distributed Algorithm
Abstract
This thesis investigates two problems in the area of distributed computation and decision-making over multi-agent networks: solving linear algebraic equations in a distributed way, and distributed convex optimization. We develop several consensus-based continuous-time coordination algorithms to compute a least-squares solution to a linear algebraic equation of the form $Ax = b$. The system is solved simultaneously by a group of $n$ agents, where each agent knows only a proper subset of the rows of the augmented matrix $[A \, b]$; in other words, no agent has access to enough information to solve the problem independently. We show that, along the flows of the proposed algorithms, the local estimate of each agent converges exponentially to the exact (unique) least-squares solution, provided that the underlying communication network is fixed and strongly connected and $A$ has full column rank.

We then consider the case where each agent is equipped with a private differentiable convex cost function. The goal of the agents is to collaboratively find the optimum of the sum of all local costs through peer-to-peer communication. We assume that the communication topology is fixed and strongly connected, and that the local cost functions are strongly convex with globally Lipschitz gradients. First, we propose two modified distributed gradient dynamics, inspired by the well-known distributed subgradient method, and show that the resulting network flows converge asymptotically to the global minimizer under these assumptions. Next, we introduce two saddle-point-type distributed algorithms that improve the convergence rate of the aforementioned algorithms at no extra communication cost. This yields an approximate saddle-point algorithm with a guaranteed exponential rate of convergence for distributed convex optimization over arbitrary strongly connected directed networks.
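To make the setup concrete, the following is a minimal numerical sketch of a classical primal-dual (saddle-point) consensus flow of the kind referenced above, applied to the distributed least-squares problem. It is not the thesis's own algorithm: it assumes an undirected ring network with a symmetric Laplacian (the thesis treats strongly connected directed graphs), the problem sizes, gain alpha, and step size dt are arbitrary illustrative choices, and the continuous-time flow is integrated by forward Euler.

import numpy as np

# Hypothetical problem data: an m x d system Ax = b with full column
# rank A, split row-wise among n agents (each agent knows only a
# proper subset of the rows of [A b], as in the thesis setup).
rng = np.random.default_rng(0)
m, d, n = 12, 3, 4
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
blocks = np.array_split(np.arange(m), n)   # agent i holds rows blocks[i]

x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # centralized reference

# Laplacian of an undirected 4-cycle (illustrative topology only; the
# thesis allows general strongly connected directed graphs).
L = 2.0 * np.eye(n)
for i in range(n):
    L[i, (i + 1) % n] -= 1.0
    L[i, (i - 1) % n] -= 1.0

# Saddle-point consensus flow with local costs
# f_i(x) = 0.5 * ||A_i x - b_i||^2, discretized by forward Euler:
#   xdot_i = -grad f_i(x_i) - alpha * sum_j L_ij x_j - sum_j L_ij z_j
#   zdot_i =  sum_j L_ij x_j
X = np.zeros((n, d))    # local estimates x_i
Z = np.zeros((n, d))    # dual states z_i that enforce exact consensus
alpha, dt = 2.0, 1e-3   # arbitrary gain and step size
for _ in range(200_000):
    G = np.stack([A[blk].T @ (A[blk] @ X[i] - b[blk])
                  for i, blk in enumerate(blocks)])
    X, Z = X + dt * (-G - alpha * (L @ X) - L @ Z), Z + dt * (L @ X)

print("max deviation from least-squares solution:",
      np.abs(X - x_star).max())

The dual states $z_i$ are what make the flow exact: at an equilibrium, $LX = 0$ forces consensus $x_i = x$ for all agents, and summing the primal equation over agents (using $\mathbf{1}^\top L = 0$) gives $\sum_i A_i^\top (A_i x - b_i) = A^\top (Ax - b) = 0$, so the consensus value is the least-squares solution.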