Show simple item record

dc.contributor.author	Tan, H.
dc.contributor.author	Lim, Hann
dc.contributor.author	Harno, H.
dc.identifier.citation	Tan, H. and Lim, H. and Harno, H. 2017. Stochastic diagonal approximate greatest descent in neural networks, in Proceedings of the International Joint Conference on Neural Networks, pp. 1895-1898.

© 2017 IEEE. Optimization is important in neural networks for iteratively updating weights in pattern classification. Existing optimization techniques suffer from suboptimal local minima and slow convergence rates. In this paper, the stochastic diagonal Approximate Greatest Descent (SDAGD) algorithm is proposed to optimize neural network weights in a multi-stage backpropagation manner. SDAGD is derived from the operation of a multi-stage decision control system and applies two rules: (1) when the local search region does not contain a minimum point, the next iterate is placed on the boundary of the local search region; (2) when the local search region contains a minimum point, the Newton method is used to search for the optimal solution. The implementation of SDAGD on a multilayer perceptron (MLP) is investigated with the goal of improving learning ability and structural simplicity. Simulation results show that a two-layer MLP trained with SDAGD achieved a misclassification rate of 4.7% on the MNIST dataset.
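The two-phase rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the phase test, the parameter names (`radius`, `eps`), and the use of a diagonal-Newton step are assumptions based only on the abstract's description of the two cases.

```python
import numpy as np

def sdagd_step(w, grad, hdiag, radius=1.0, eps=1e-8):
    """One illustrative two-phase weight update (sketch, not the paper's code).

    Phase 1: if the diagonal-Newton step would leave the local search
    region of radius `radius`, step to the region's boundary along -grad.
    Phase 2: otherwise, take the diagonal-Newton step directly.
    """
    # Diagonal approximation of the Newton step: grad / |Hessian diagonal|
    newton_step = grad / (np.abs(hdiag) + eps)
    if np.linalg.norm(newton_step) > radius:
        # Phase 1: minimum assumed outside the region -> boundary step
        return w - radius * grad / (np.linalg.norm(grad) + eps)
    # Phase 2: minimum assumed inside the region -> Newton step
    return w - newton_step
```

On a simple quadratic loss the iterates first take fixed-length boundary steps toward the minimum, then switch to Newton steps and converge in a few iterations; in an MLP, `grad` and `hdiag` would come from backpropagation over a stochastic minibatch.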

dc.title	Stochastic diagonal approximate greatest descent in neural networks
dc.type	Conference Paper
dcterms.source.title	Proceedings of the International Joint Conference on Neural Networks
dcterms.source.series	Proceedings of the International Joint Conference on Neural Networks
curtin.department	Curtin Malaysia
curtin.accessStatus	Fulltext not available
