Stochastic diagonal approximate greatest descent in neural networks
© 2017 IEEE. Optimization is important in neural networks for iteratively updating weights for pattern classification. Existing optimization techniques suffer from suboptimal local minima and slow convergence rates. In this paper, the stochastic diagonal Approximate Greatest Descent (SDAGD) algorithm is proposed to optimize neural network weights in a multi-stage backpropagation manner. SDAGD is derived from the operation of a multi-stage decision control system. It applies two control-system concepts: (1) when the local search region does not contain a minimum point, the next iterate is placed at the boundary of the local search region; (2) when the local search region contains a minimum point, Newton's method is used to search for the optimal solution. The implementation of SDAGD on a multilayer perceptron (MLP) is investigated with the goal of improving learning ability and structural simplicity. Simulation results showed that a two-layer MLP trained with SDAGD achieved a misclassification rate of 4.7% on the MNIST dataset.
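To make the two-phase strategy concrete, below is a minimal NumPy sketch of one SDAGD-style weight update. It assumes the damped-Newton form of approximate greatest descent, in which a relative step length mu = ||g||/R blends the two phases; the function name sdagd_step, the local search radius R, and the externally supplied diagonal Hessian estimate are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sdagd_step(w, grad, hess_diag, R=1.0, eps=1e-8):
    """One SDAGD-style update (illustrative sketch, not the paper's code).

    Far from a minimum, mu = ||grad|| / R dominates the denominator and
    the iterate moves roughly a distance R along -grad, i.e. to the
    boundary of the local search region (phase 1). Near a minimum,
    grad -> 0 so mu -> 0 and the update reduces to a diagonal Newton
    step (phase 2).
    """
    mu = np.linalg.norm(grad) / R                  # relative step length
    return w - grad / (mu + np.abs(hess_diag) + eps)

# Toy usage on a separable quadratic f(w) = 0.5 * sum(a_i * w_i**2),
# whose gradient is a * w and whose Hessian diagonal is a.
a = np.array([1.0, 10.0])
w = np.array([5.0, -3.0])
for _ in range(50):
    w = sdagd_step(w, grad=a * w, hess_diag=a, R=1.0)
print(w)  # converges toward the minimum at the origin
```

In a network, grad would be the backpropagated stochastic minibatch gradient and hess_diag a diagonal second-order estimate obtained during the same backward pass.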
Showing items related by title, author, creator and subject.
Alzahrani, Mojib Othman (2012) This research investigates reasons for differences in quality between advertisements created by local and international advertising agencies operating in Saudi Arabia. It focuses on the investment in, and use of, computer ...
Rusmin, Rusmin; Astami, Emita; Scully, Glennda (2014) This study examines the outcome of decentralisation reforms in Indonesia, focusing on the association between demographic characteristics and differences in the financial condition of local government units. It investigates ...
Joseph, Corina (2010) This thesis examines the extent of sustainability reporting on Malaysian local authority websites. The use of websites by government in Malaysia is closely associated with the public service administrative reforms. The ...