Adaptive Second-order Derivative Approximate Greatest Descent Optimization for Deep Learning Neural Networks
dc.contributor.author | Tan, Hong Hui | |
dc.contributor.supervisor | Hann Lim | en_US |
dc.date.accessioned | 2020-02-19T05:02:20Z | |
dc.date.available | 2020-02-19T05:02:20Z | |
dc.date.issued | 2019 | en_US |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/77991 | |
dc.description.abstract |
Backpropagation using Stochastic Diagonal Approximate Greatest Descent (SDAGD) is a novel adaptive second-order derivative optimization method for updating the weights of deep learning neural networks. SDAGD applies a two-phase switching strategy: far from the optimal solution, it seeks the solution along a long-term optimal trajectory, and it automatically switches to the Newton method when nearer to the optimal solution. SDAGD has the advantages of the steepest training roll-off rate, adaptive adjustment of step length, and the ability to deal with vanishing gradient issues in deep architectures. | en_US |
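The abstract describes a two-phase weight update: a bounded, trajectory-following step when far from a minimum, switching to a (diagonal) Newton step when close. The sketch below is a minimal illustration of that idea only; the function name, the gradient-norm switching criterion, the damping constant, and the search radius are all assumptions for illustration and are not the thesis's exact formulation.

```python
import numpy as np

def sdagd_like_step(w, grad, hess_diag, radius=1.0, damping=1e-3, switch_tol=1e-2):
    """One illustrative two-phase update in the spirit of SDAGD (assumed form).

    Phase 1 (far from a minimum): take a step of bounded length `radius`
    along a damped diagonal-Newton direction, mimicking a long-term trajectory.
    Phase 2 (near a minimum, small gradient norm): use the diagonal Newton
    step directly, which gives the adaptive step length near the solution.
    """
    # Damped diagonal-Newton direction; damping guards against tiny curvature
    direction = -grad / (np.abs(hess_diag) + damping)

    if np.linalg.norm(grad) > switch_tol:
        # Phase 1: limit the step to the search radius
        step = radius * direction / (np.linalg.norm(direction) + 1e-12)
    else:
        # Phase 2: near the optimum, take the full diagonal Newton step
        step = direction
    return w + step

# Example usage on a toy quadratic loss 0.5 * w^T diag(h) w
w = np.array([2.0, -1.5])
h = np.array([4.0, 1.0])          # exact diagonal Hessian of the toy loss
for _ in range(20):
    g = h * w                     # gradient of the toy loss
    w = sdagd_like_step(w, g, h)
print(w)                          # approaches the minimizer [0, 0]
```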
dc.publisher | Curtin University | en_US |
dc.title | Adaptive Second-order Derivative Approximate Greatest Descent Optimization for Deep Learning Neural Networks | en_US |
dc.type | Thesis | en_US |
dcterms.educationLevel | PhD | en_US |
curtin.department | Curtin Malaysia | en_US |
curtin.accessStatus | Open access | en_US |
curtin.faculty | Curtin Malaysia | en_US |
curtin.contributor.orcid | Tan, Hong Hui [0000-0003-0633-1292] | |