Incremental gradient-free method for nonsmooth distributed optimization
Date
2017
Abstract
In this paper we consider minimizing the sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. We then propose incremental gradient-free methods that select the component function in either a cyclic or a randomized order, and we provide convergence and iteration-complexity analyses of the proposed methods under suitable stepsize rules. To illustrate the proposed methods, extensive numerical results on a distributed ℓ1-regression problem are presented. Compared with existing incremental subgradient-based methods, our methods require only evaluations of function values rather than subgradients, which may make them preferable in practice.
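As a rough illustration of the idea described in the abstract, the following is a minimal Python sketch of an incremental gradient-free update: at each iteration one component function is chosen (cyclically or at random) and a Gaussian-smoothing two-point function-value estimate stands in for the subgradient. The function names, stepsize rule, and smoothing parameter below are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def incremental_gf_minimize(components, x0, n_iters=20000, mu=1e-4,
                            step0=1.0, order="random", seed=0):
    """Sketch of incremental gradient-free minimization of sum_i f_i(x).

    Each iteration picks one component f_i (cyclic or randomized order)
    and forms a two-point gradient-free estimate
        g = (f_i(x + mu*u) - f_i(x)) / mu * u,   u ~ N(0, I),
    which replaces the subgradient in the incremental update
        x <- x - step_k * g,   with diminishing step_k = step0 / sqrt(k+1).
    Parameters mu, step0, and n_iters are assumed values for this demo.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = len(components)
    for k in range(n_iters):
        i = k % m if order == "cyclic" else rng.integers(m)
        f = components[i]
        u = rng.standard_normal(x.shape)
        g = (f(x + mu * u) - f(x)) / mu * u   # function values only
        x -= step0 / np.sqrt(k + 1) * g       # diminishing stepsize
    return x

# Toy distributed l1-regression: agent i holds (a_i, b_i) and its
# local component is f_i(x) = |a_i @ x - b_i| (convex, nonsmooth).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
comps = [lambda x, a=a, y=y: abs(a @ x - y) for a, y in zip(A, b)]
x_hat = incremental_gf_minimize(comps, np.zeros(5), order="random")
print(np.linalg.norm(x_hat - x_true))
```

Note that the update touches only one component per iteration and never queries a subgradient, which is the practical appeal the abstract highlights; the cyclic variant is obtained by passing order="cyclic".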
Related items
Showing items related by title, author, creator and subject.
- Lazarescu, Mihai M. (2000) In this thesis we present an incremental learning algorithm for learning and classifying the pattern of movement of multiple objects in a dynamic scene. The method that we describe is based on symbolic representations of ...
- Ng, Shu; Mclachlan, G.; Lee, Andy (2006) Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization ...
- Ciketic, S.; Hayatbakhsh, R.; McKetin, Rebecca; Doran, C.; Najman, J. (2015) Introduction and aims: Illicit methamphetamine (MA) use is an important public health concern. There is a dearth of knowledge about effective and cost-effective treatments for methamphetamine (MA) dependence in Australia. ...