Show simple item record

dc.contributor.author	Li, J.
dc.contributor.author	Wu, Changzhi
dc.contributor.author	Wu, Z.
dc.contributor.author	Long, Q.
dc.identifier.citation	Li, J. and Wu, C. and Wu, Z. and Long, Q. 2015. Gradient-free method for nonsmooth distributed optimization. Journal of Global Optimization. 61: pp. 325-340.

In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm with Gaussian smoothing to the distributed case, and then prove the convergence of the resulting algorithm. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred in practical engineering applications. Since only cost function values are required, in theory our method may suffer a factor of up to d (the dimension of each agent) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can even outperform subgradient-based methods, which may be due to the slow convergence of those methods when only subgradients are available.
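As an illustration of the idea behind the abstract, the following is a minimal sketch of a (centralized) random gradient-free oracle based on Gaussian smoothing, in the spirit of the Nesterov scheme the paper extends to the distributed setting. All function names, the smoothing parameter `mu`, the step size, and the test function are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import numpy as np

def gf_oracle(f, x, mu, rng):
    """Two-point gradient-free estimate using a random Gaussian
    direction u: approximates the gradient of the Gaussian-smoothed
    surrogate of f at x, needing only function values (no subgradients)."""
    u = rng.standard_normal(x.shape)           # random smoothing direction
    return (f(x + mu * u) - f(x)) / mu * u     # finite-difference estimate

def minimize_gf(f, x0, mu=1e-3, step=1e-2, iters=5000, seed=0):
    """Plain gradient-free descent with a constant step size
    (illustrative choices, not the paper's schedule)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * gf_oracle(f, x, mu, rng)
    return x

# Nonsmooth test problem: f(x) = ||x||_1, minimized at the origin.
f = lambda x: np.abs(x).sum()
x_star = minimize_gf(f, x0=[1.0, -2.0])
```

Because the oracle uses only two function evaluations per iteration, the per-step cost is gradient-free; the price, as the abstract notes, is a dimension-dependent factor in the theoretical convergence rate.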

dc.subject	Convex optimization
dc.subject	Distributed algorithm
dc.subject	Gradient-free method
dc.subject	Gaussian smoothing
dc.title	Gradient-free method for nonsmooth distributed optimization
dc.type	Journal Article
dcterms.source.title	Journal of Global Optimization
curtin.department	Department of Construction Management
curtin.accessStatus	Fulltext not available
