Radial effect in stochastic diagonal approximate greatest descent
|dc.identifier.citation||Tan, H. and Lim, H. and Harno, H. 2017. Radial effect in stochastic diagonal approximate greatest descent, pp. 226-229.|
© 2017 IEEE. Stochastic Diagonal Approximate Greatest Descent (SDAGD) is proposed to manage optimization in two stages: (a) apply a radial boundary to estimate the step length when the weights are far from the solution, and (b) apply the Newton method when the weights are within the solution level set. This is inspired by multi-stage decision control systems, in which different strategies are used under different conditions. In a numerical optimization context, larger steps should be taken at the beginning of optimization and gradually reduced as the iterates approach the minimum point. Nevertheless, the intuition for determining the radial boundary when the optimized parameters are far from the solution has yet to be investigated for high-dimensional data. The radial step length in SDAGD manipulates the relative step length used to construct each iteration. SDAGD is implemented in a two-layer Multilayer Perceptron to evaluate the effects of R on artificial neural networks. It is concluded that the greater the value of R, the higher the effective learning rate of the SDAGD algorithm, when R is constrained between 100 and 10,000.
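The two-stage scheme described in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the function name `sdagd_step`, the switching tolerance `tol`, and the use of the radial term mu = ||g||/R added to a diagonal Hessian approximation are assumptions based on the two stages named above.

```python
import numpy as np

def sdagd_step(theta, grad, diag_hess, R, tol=1e-3):
    """One hedged SDAGD-style update (illustrative sketch, not the paper's code).

    Stage (a), far from the solution: the step length is bounded by a
    radial term mu = ||grad|| / R added to the diagonal Hessian, so a
    larger R yields a larger effective step.
    Stage (b), near the solution (small gradient): plain diagonal Newton.
    """
    g_norm = np.linalg.norm(grad)
    if g_norm > tol:                      # stage (a): radial boundary active
        mu = g_norm / R                   # assumed radial step-length control
        return theta - grad / (np.abs(diag_hess) + mu)
    return theta - grad / np.abs(diag_hess)  # stage (b): diagonal Newton

# Toy quadratic f(theta) = 0.5 * theta^T diag(A) theta, whose minimum is 0.
A = np.array([4.0, 1.0])                  # exact diagonal Hessian here
theta = np.array([5.0, -3.0])
for _ in range(200):
    grad = A * theta
    theta = sdagd_step(theta, grad, diag_hess=A, R=1000.0)
print(np.allclose(theta, 0.0, atol=1e-4))  # prints True
```

On this toy problem a larger R makes mu smaller, so stage (a) behaves closer to a full Newton step, consistent with the abstract's observation that larger R within 100 to 10,000 gives faster learning.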
|dc.title||Radial effect in stochastic diagonal approximate greatest descent|
|dcterms.source.title||Proceedings of the 2017 IEEE International Conference on Signal and Image Processing Applications, ICSIPA 2017|
|dcterms.source.series||Proceedings of the 2017 IEEE International Conference on Signal and Image Processing Applications, ICSIPA 2017|
|curtin.accessStatus||Fulltext not available|