Distributed nonconvex optimization: Algorithms and convergence analysis

Date
2017-01-01
Authors
Hajinezhad, Davood
Abstract

This thesis addresses the problem of distributed optimization and learning over multi-agent networks. Our main focus is to design efficient algorithms for a class of nonconvex problems defined over networks in which each agent/node has only partial knowledge of the entire problem. Multi-agent nonconvex optimization has recently gained much attention due to its wide applications in big data analysis, sensor networks, signal processing, resource allocation, and communication networks, to name a few. In this work, we develop a general class of primal-dual algorithms for distributed optimization problems in challenging setups, such as nonconvex loss functions, nonsmooth regularizers, and coupling constraints. Further, we consider a different setup in which each agent can access only zeroth-order information (i.e., the functional values) of its local functions. Rigorous convergence and rate-of-convergence analyses are provided for the proposed algorithms. Our work represents one of the first attempts to address nonconvex optimization and learning over networks.
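The zeroth-order setting mentioned in the abstract means an agent can query only function values, never analytic gradients. A standard way to make gradient-based methods applicable in this regime is a randomized two-point finite-difference gradient estimator. The sketch below is a generic illustration of that idea, not the specific estimator or algorithm developed in the thesis; the function `f`, the smoothing radius `mu`, and the sample count are illustrative choices.

```python
import numpy as np

def zeroth_order_grad(f, x, mu=1e-4, rng=None):
    # Two-point zeroth-order gradient estimator: queries only the
    # function values f(x + mu*u) and f(x), never a true gradient.
    # In expectation over u ~ N(0, I), the estimate approximates
    # the gradient of a smoothed version of f.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)  # random search direction
    return (f(x + mu * u) - f(x)) / mu * u

# Illustration (not from the thesis): f(x) = ||x||^2, whose true
# gradient at x is 2*x. Averaging many single-sample estimates
# reduces the variance of the estimator.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0])
est = np.mean(
    [zeroth_order_grad(f, x, rng=np.random.default_rng(s)) for s in range(2000)],
    axis=0,
)
# est should be close to the true gradient [2.0, -4.0]
```

A single-sample estimate is very noisy, which is why zeroth-order methods typically trade extra function queries (or iterations) for accuracy; the averaging loop above mimics that variance reduction.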

Keywords
Convergence analysis, Distributed optimization, Nonconvex optimization, Rate of convergence, Zeroth-order optimization