Show simple item record

dc.contributor.author: Tan, Hong Hui
dc.contributor.supervisor: Hann Lim
dc.date.accessioned: 2020-02-19T05:02:20Z
dc.date.available: 2020-02-19T05:02:20Z
dc.date.issued: 2019
dc.identifier.uri: http://hdl.handle.net/20.500.11937/77991
dc.description.abstract:

Backpropagation using Stochastic Diagonal Approximate Greatest Descent (SDAGD) is a novel adaptive second-order derivative optimization method for updating the weights of deep learning neural networks. SDAGD applies a two-phase switching strategy: far from the solution, it seeks the solution along a long-term optimal trajectory, and it automatically switches to the Newton method when nearer to the optimal solution. SDAGD offers a steep training roll-off rate, adaptive adjustment of the step length, and the ability to deal with vanishing-gradient issues in deep architectures.

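To illustrate the two-phase idea described in the abstract, the sketch below implements a toy optimizer that takes long fixed-radius steps while the gradient is large and switches to a Newton-like update with a diagonal Hessian approximation once the gradient becomes small. The switching threshold, step-radius rule, and diagonal Hessian estimate here are illustrative assumptions, not the SDAGD formulation from the thesis.

```python
# Illustrative sketch only: a toy two-phase descent in the spirit of the abstract.
# The switch_tol threshold, fixed step radius, and diagonal Hessian handling are
# assumptions for demonstration, not the thesis's SDAGD algorithm.
import numpy as np

def toy_two_phase_descent(grad, diag_hess, w0, radius=1.0, switch_tol=1e-2,
                          max_iter=200, eps=1e-8):
    """Minimise a function given its gradient and a diagonal Hessian estimate."""
    w = w0.astype(float).copy()
    for _ in range(max_iter):
        g = grad(w)
        gnorm = np.linalg.norm(g)
        if gnorm < eps:
            break
        if gnorm > switch_tol:
            # Phase 1: long step of fixed radius along the steepest-descent
            # direction (stands in for the long-term optimal trajectory).
            w -= radius * g / gnorm
        else:
            # Phase 2: Newton-like step using the diagonal Hessian approximation.
            w -= g / (diag_hess(w) + eps)
    return w

if __name__ == "__main__":
    # Simple quadratic test: f(w) = 0.5 * w^T A w with A diagonal.
    A = np.array([4.0, 1.0, 0.25])
    grad = lambda w: A * w
    diag_hess = lambda w: A
    w_star = toy_two_phase_descent(grad, diag_hess, np.array([5.0, -3.0, 2.0]))
    print("solution:", w_star)  # expected to approach the zero vector
```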
dc.publisher: Curtin University
dc.title: Adaptive Second-order Derivative Approximate Greatest Descent Optimization for Deep Learning Neural Networks
dc.type: Thesis
dcterms.educationLevel: PhD
curtin.department: Curtin Malaysia
curtin.accessStatus: Open access
curtin.faculty: Curtin Malaysia
curtin.contributor.orcid: Tan, Hong Hui [0000-0003-0633-1292]

