Show simple item record

dc.contributor.author  Li, J.
dc.contributor.author  Li, G.
dc.contributor.author  Wu, Z.
dc.contributor.author  Wu, Changzhi
dc.contributor.author  Wang, X.
dc.contributor.author  Lee, J.
dc.contributor.author  Jung, K.
dc.identifier.citation  Li, J. and Li, G. and Wu, Z. and Wu, C. and Wang, X. and Lee, J. and Jung, K. 2017. Incremental gradient-free method for nonsmooth distributed optimization. Journal of Industrial and Management Optimization. 13 (4): pp. 1841-1857.

In this paper we consider minimizing the sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. We then propose incremental gradient-free methods that select the component function in either a cyclic or a randomized order. We provide convergence and iteration-complexity analyses of the proposed methods under suitable stepsize rules. To illustrate the proposed methods, extensive numerical results on a distributed l1-regression problem are presented. Compared with existing incremental subgradient-based methods, our methods require only function-value evaluations rather than subgradients, which may be preferable in practical engineering applications.
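The abstract's core idea can be sketched in code: replace the subgradient of a single component function with a two-point random gradient-free estimate (in the spirit of Nesterov's random gradient-free oracle) and apply it incrementally, cycling through the components. This is a minimal illustrative sketch, not the authors' implementation; the stepsize, smoothing parameter mu, and the l1-regression setup are assumptions chosen for demonstration.

```python
import numpy as np

# Hedged sketch of an incremental gradient-free method for the
# distributed l1-regression problem
#     min_x  f(x) = sum_i |a_i^T x - b_i|,
# where agent i holds the component f_i(x) = |a_i^T x - b_i|.
# Each step queries only FUNCTION VALUES of one component
# (no subgradients), visited in cyclic order.

def gradient_free_oracle(f, x, mu, rng):
    """Two-point random estimate of a smoothed gradient of f at x."""
    u = rng.standard_normal(x.shape)           # random Gaussian direction
    return (f(x + mu * u) - f(x)) / mu * u     # Nesterov-style estimator

def incremental_gf_method(A, b, steps=5000, alpha=0.005, mu=1e-4, seed=0):
    """Cyclic incremental gradient-free method for l1-regression."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(steps):
        i = k % m                              # cyclic component selection
        f_i = lambda z: abs(A[i] @ z - b[i])   # agent i's local component
        g = gradient_free_oracle(f_i, x, mu, rng)
        x = x - alpha * g                      # incremental update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true                             # noiseless data: optimum is 0
    x_hat = incremental_gf_method(A, b)
    print("final l1 objective:", np.sum(np.abs(A @ x_hat - b)))
```

A randomized-order variant, as mentioned in the abstract, would draw `i = rng.integers(m)` each step instead of cycling.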

dc.publisher  American Institute of Mathematical Sciences
dc.title  Incremental gradient-free method for nonsmooth distributed optimization
dc.type  Journal Article
dcterms.source.title  Journal of Industrial and Management Optimization
curtin.department  Department of Construction Management
curtin.accessStatus  Open access via publisher
