
dc.contributor.author: Rouillard, Thibault
dc.contributor.supervisor: Lei Cui (en_US)
dc.contributor.supervisor: Ian Howard (en_US)
dc.date.accessioned: 2021-10-05T05:39:11Z
dc.date.available: 2021-10-05T05:39:11Z
dc.date.issued: 2020 (en_US)
dc.identifier.uri: http://hdl.handle.net/20.500.11937/85887
dc.description.abstract: This thesis proposed a framework for training deep-reinforcement-learning agents on fine manipulation tasks with goal uncertainties. The framework consisted of three aspects: state-space formulation, artificial training-goal uncertainties, and progressive training. It was applied in simulation to two fine manipulation tasks, square Peg-in-Hole and round Peg-in-Hole. The resulting behaviours were then transferred to a physical robotic manipulator and compared against traditional training methods. The deep-reinforcement-learning agents trained with the framework outperformed those trained with definite goals. (en_US)
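
As a rough illustration of the goal-uncertainty idea summarised in the abstract, the hypothetical sketch below perturbs a nominal hole position at the start of each training episode so the agent cannot rely on an exact goal location. The nominal position, noise scale, and function names are assumptions made for illustration only, not the implementation described in the thesis.

    import numpy as np

    # Hypothetical example: inject artificial goal uncertainty by perturbing
    # the nominal hole position before each training episode.
    NOMINAL_HOLE_XY = np.array([0.40, 0.10])  # assumed nominal hole centre (m)
    GOAL_NOISE_STD = 0.002                    # assumed 2 mm positional uncertainty

    def sample_training_goal(rng: np.random.Generator) -> np.ndarray:
        """Return a goal position with artificial uncertainty added."""
        noise = rng.normal(scale=GOAL_NOISE_STD, size=2)
        return NOMINAL_HOLE_XY + noise

    rng = np.random.default_rng(0)
    for episode in range(3):
        goal = sample_training_goal(rng)
        # ... reset the simulated manipulator and run the agent toward `goal` ...
        print(f"episode {episode}: perturbed goal = {goal}")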
dc.publisher: Curtin University (en_US)
dc.title: A Deep-Reinforcement-Learning Approach to the Peg-in-Hole Task with Goal Uncertainties (en_US)
dc.type: Thesis (en_US)
dcterms.educationLevel: PhD (en_US)
curtin.department: School of Civil and Mechanical Engineering (en_US)
curtin.accessStatus: Open access (en_US)
curtin.faculty: Science and Engineering (en_US)
curtin.contributor.orcid: Rouillard, Thibault [0000-0002-2465-7678] (en_US)

