Adaptive Second-order Derivative Approximate Greatest Descent Optimization for Deep Learning Neural Networks
Access Status
Open access
Date
2019
Supervisor
Hann Lim
Type
Thesis
Award
PhD
Faculty
Curtin Malaysia
School
Curtin Malaysia
Collection
Abstract
Backpropagation using Stochastic Diagonal Approximate Greatest Descent (SDAGD) is a novel adaptive second-order derivative optimization method for updating the weights of deep learning neural networks. SDAGD applies a two-phase switching strategy: it seeks the solution from afar using a long-term optimal trajectory and automatically switches to the Newton method when it approaches the optimal solution. SDAGD offers the advantages of a steep training roll-off rate, adaptive adjustment of the step length, and the ability to mitigate vanishing gradient issues in deep architectures.
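The abstract does not give the update equations, but the two-phase behaviour it describes can be sketched as a damped diagonal-Newton step whose damping term shrinks as the iterate nears the optimum. The sketch below is illustrative only: the `radius` parameter, the damping formula `mu = ||g|| / radius`, and the element-wise inversion are assumptions standing in for the thesis's actual derivation, not the published SDAGD rule.

```python
import numpy as np

def two_phase_diag_update(w, grad, hess_diag, radius=1.0, eps=1e-8):
    """Illustrative two-phase weight update (not the exact SDAGD rule).

    w         : current weight vector
    grad      : gradient of the loss at w
    hess_diag : diagonal approximation of the Hessian at w
    radius    : hypothetical search-region radius controlling the phase switch
    """
    # Damping term: large when the gradient norm is large (far phase, behaves
    # like an adaptively scaled gradient step), small near the optimum so the
    # update tends toward a Newton step.
    mu = np.linalg.norm(grad) / radius

    # Damped diagonal-Newton step, computed element-wise because the
    # Hessian approximation is diagonal: (mu*I + |diag(H)|)^-1 * grad.
    step = grad / (mu + np.abs(hess_diag) + eps)

    return w - step

# Toy usage on a quadratic loss 0.5 * w^T diag(h) w.
h = np.array([4.0, 1.0, 0.5])
w = np.array([2.0, -3.0, 1.5])
for _ in range(20):
    w = two_phase_diag_update(w, grad=h * w, hess_diag=h)
print(w)  # converges toward the zero vector
```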
Related items
Showing items related by title, author, creator and subject.
- Li, Bin (2011) In this thesis, we consider several types of optimal control problems with constraints on the state and control variables. These problems have many engineering applications. Our aim is to develop efficient numerical methods ...
- Yu, Changjun (2012) In this thesis, we propose new computational algorithms and methods for solving four classes of constrained optimization and optimal control problems. In Chapter 1, we present a brief review on optimization and ...
- Zhou, Jingyang (2011) In this thesis, we deal with several optimal guidance and control problems of the spacecrafts arising from the study of lunar exploration. The research is composed of three parts: 1. Optimal guidance for the lunar module ...