Show simple item record

dc.contributor.author	Rouillard, Thibault
dc.contributor.supervisor	Lei Cui	en_US
dc.contributor.supervisor	Ian Howard	en_US

dc.description.abstract	The thesis proposed a framework for training deep-reinforcement-learning agents on fine manipulation tasks with goal uncertainties. The framework consisted of three aspects: state-space formulation, artificial training-goal uncertainties, and progressive training. It was applied in simulation to two fine manipulation tasks, square Peg-in-Hole and round Peg-in-Hole. The resulting behaviours were then transferred to a physical robotic manipulator and compared against those produced by traditional training methods. The agents trained using the framework outperformed those trained with definite goals.	en_US
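The abstract's idea of "artificial training-goal uncertainties" combined with "progressive training" can be illustrated with a minimal sketch: each training episode perturbs the nominal goal (here, a hole position) within a radius that grows over the course of training. All names, values, and the linear schedule below are hypothetical assumptions for illustration; the record does not specify the thesis's actual formulation.

```python
import random

def progressive_noise(episode, max_episodes, max_radius):
    """Progressive training (assumed linear schedule): grow the
    goal-uncertainty radius from 0 to max_radius over training."""
    return max_radius * min(1.0, episode / max_episodes)

def sample_training_goal(nominal_goal, noise_radius):
    """Artificial goal uncertainty: perturb each coordinate of the
    nominal hole position uniformly within +/- noise_radius."""
    return [g + random.uniform(-noise_radius, noise_radius)
            for g in nominal_goal]

# Hypothetical nominal hole position (metres) and schedule.
nominal = [0.40, 0.10, 0.05]
radius = progressive_noise(episode=500, max_episodes=1000, max_radius=0.01)
goal = sample_training_goal(nominal, radius)
```

Training against such perturbed goals, rather than a single definite goal, is one plausible way an agent learns behaviour robust to goal uncertainty at deployment.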

dc.publisher	Curtin University	en_US
dc.title	A Deep-Reinforcement-Learning Approach to the Peg-in-Hole Task with Goal Uncertainties	en_US
curtin.department	School of Civil and Mechanical Engineering	en_US
curtin.accessStatus	Open access	en_US
curtin.faculty	Science and Engineering	en_US
curtin.contributor.orcid	Rouillard, Thibault [0000-0002-2465-7678]	en_US
