Stochastic diagonal approximate greatest descent in neural networks
© 2017 IEEE. Optimization is important in neural networks for iteratively updating weights for pattern classification. Existing optimization techniques suffer from suboptimal local minima and slow convergence rates. In this paper, the stochastic diagonal Approximate Greatest Descent (SDAGD) algorithm is proposed to optimize neural network weights in a multi-stage back-propagation manner. SDAGD is derived from the operation of a multi-stage decision control system and adopts a two-stage control concept: (1) when the local search region does not contain a minimum point, the next iterate is placed on the boundary of the local search region; (2) when the local search region contains a minimum point, the Newton method is used to search for the optimum solution. The implementation of SDAGD on a multilayer perceptron (MLP) is investigated with the goal of improving learning ability while retaining structural simplicity. Simulation results show that a two-layer MLP trained with SDAGD achieved a misclassification rate of 4.7% on the MNIST dataset.
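The two-stage control concept can be illustrated with a short sketch. The Python snippet below is a minimal illustration based only on the description in this abstract, not the paper's exact implementation: it assumes an AGD-style damped update delta_w = -(diag(H) + mu*I)^{-1} g with mu = ||g|| / R, where R is the radius of the local search region; the names sdagd_step and radius, and the toy quadratic problem, are all illustrative assumptions.

    # Illustrative sketch only; the paper's exact update rule may differ.
    # Assumed form: delta_w = -(diag_H + mu*I)^{-1} g, with mu = ||g|| / R.
    # Far from a minimum, mu dominates the damping and the step lands near
    # the boundary of a spherical search region of radius R (stage 1);
    # near a minimum, mu -> 0 and the update approaches a diagonal Newton
    # step (stage 2).
    import numpy as np

    def sdagd_step(w, grad, diag_hess, radius=1.0, eps=1e-8):
        """One hypothetical SDAGD-style update for weight vector w."""
        mu = np.linalg.norm(grad) / radius           # relative step-length control
        damped = np.maximum(diag_hess, 0.0) + mu + eps  # damped diagonal curvature
        return w - grad / damped

    # Toy usage on a quadratic loss f(w) = 0.5 * w^T A w with diagonal A
    A = np.array([4.0, 1.0])
    w = np.array([5.0, -3.0])
    for _ in range(20):
        grad = A * w        # exact gradient of the quadratic
        diag_hess = A       # exact diagonal Hessian
        w = sdagd_step(w, grad, diag_hess)
    print(w)                # approaches the minimum at the origin

Because mu scales with the gradient norm, the same rule interpolates automatically between the boundary-constrained step and the Newton step, which is the hand-over between the two stages described above.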
Showing items related by title, author, creator and subject.
Tan, H.; Lim, Hann; Harno, H. (2017) © 2017 IEEE. The deep structure of Convolutional Neural Networks (CNN) has recently gained intense attention due to its good performance in object recognition. One of the crucial components in CNN is the ...
Tan, H.; Lim, Hann; Harno, H. (2017) © 2017 IEEE. Stochastic Diagonal Approximate Greatest Descent (SDAGD) is proposed to manage the optimization in two stages: (a) apply a radial boundary to estimate the step length when the weights are far from the solution, (b) ...
Alzahrani, Mojib Othman (2012)This research investigates reasons for differences in quality between advertisements created by local and international advertising agencies operating in Saudi Arabia. It focuses on the investment in, and use of, computer ...