Gradient-free method for nonsmooth distributed optimization
Date: 2015
Abstract
In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend Nesterov's (centralized) random gradient-free algorithm and Gaussian smoothing technique to the distributed setting. We then prove convergence of the algorithm and derive an explicit convergence rate in terms of the network size and topology. The proposed method is gradient-free, which may be preferred by practicing engineers. Since only cost function values are required, in theory the method may suffer a factor of up to d (the dimension of each agent's decision variable) in its convergence rate relative to distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems the method can even outperform subgradient-based methods, which may be explained by the slow convergence of subgradient steps on such problems.
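The core mechanism described in the abstract, a Gaussian-smoothing two-point estimator built from function values only, combined with a consensus step over the network, can be sketched briefly. The Python sketch below is illustrative, not the paper's method: the ring topology, the mixing matrix W, the l1 local costs, the step-size schedule, and the smoothing parameter mu are all assumptions chosen for the demo.

```python
import numpy as np

# A minimal sketch of a distributed random gradient-free method, assuming:
# - a ring network of n agents with a doubly stochastic mixing matrix W,
# - nonsmooth convex local costs f_i(x) = ||x - a_i||_1 (illustrative choice),
# - Nesterov-style Gaussian smoothing with parameter mu: the two-point
#   estimator g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I), is an
#   unbiased gradient estimate of the smoothed surrogate f_mu.

rng = np.random.default_rng(0)
n, d = 6, 4                        # number of agents, problem dimension
targets = rng.normal(size=(n, d))  # a_i: data held privately by agent i

def local_cost(i, x):
    """Nonsmooth local objective of agent i (only function values are queried)."""
    return np.abs(x - targets[i]).sum()

# Doubly stochastic weights for a ring: each agent averages itself
# and its two neighbours with weight 1/3.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

X = np.zeros((n, d))               # row i is agent i's current iterate
mu = 1e-3                          # smoothing parameter

for t in range(1, 5001):
    alpha = 1.0 / np.sqrt(t)       # diminishing step size
    X = W @ X                      # consensus step: mix with neighbours
    for i in range(n):
        u = rng.normal(size=d)     # random Gaussian direction
        g = (local_cost(i, X[i] + mu * u) - local_cost(i, X[i])) / mu * u
        X[i] -= alpha * g          # move along the zeroth-order estimate

# For the l1 cost, the minimiser of sum_i f_i is the coordinate-wise
# median of the a_i, so the agents should drift toward it in consensus.
print("consensus error:", np.abs(X - X.mean(axis=0)).max())
print("distance to median:", np.abs(X.mean(axis=0) - np.median(targets, axis=0)).max())
```

The point of the sketch is that each agent only ever evaluates its own f_i, never a (sub)gradient, which is the gradient-free oracle the abstract refers to; the extra dimension-dependent factor in the rate comes from estimating a d-dimensional gradient from scalar function differences.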
Related items
Showing items related by title, author, creator and subject.
- Li, J.; Li, G.; Wu, Z.; Wu, Changzhi; Wang, X.; Lee, J.; Jung, K. (2017) In this paper we consider the minimization of the sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. Then ...
- Li, J.; Wu, Changzhi; Wu, Z.; Long, Q.; Wang, Xiangyu (2014) We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the ...
- Zhao, C.; Chen, S.; Wu, Changzhi; Chen, F.; Ji, Y. (2018) Network utility maximization has been widely adopted to allocate the resource of networks. However, it suffers from slow convergence under a distributed computational environment. This paper proposes a fast ...