A Deep-Reinforcement-Learning Approach to the Peg-in-Hole Task with Goal Uncertainties
dc.contributor.author | Rouillard, Thibault | |
dc.contributor.supervisor | Lei Cui | en_US |
dc.contributor.supervisor | Ian Howard | en_US |
dc.date.accessioned | 2021-10-05T05:39:11Z | |
dc.date.available | 2021-10-05T05:39:11Z | |
dc.date.issued | 2020 | en_US |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/85887 | |
dc.description.abstract |
The thesis proposed a framework for training deep-reinforcement-learning agents on fine manipulation tasks with goal uncertainties. It consisted of three aspects: state-space formulation, artificial training-goal uncertainties, and progressive training. The framework was applied in simulation to two fine manipulation tasks, the square Peg-in-Hole and the round Peg-in-Hole. The resulting behaviours were then transferred to a physical robotic manipulator and compared against traditional training methods. The deep-reinforcement-learning agents trained with this framework outperformed those trained with definite goals. | en_US |
dc.publisher | Curtin University | en_US |
dc.title | A Deep-Reinforcement-Learning Approach to the Peg-in-Hole Task with Goal Uncertainties | en_US |
dc.type | Thesis | en_US |
dcterms.educationLevel | PhD | en_US |
curtin.department | School of Civil and Mechanical Engineering | en_US |
curtin.accessStatus | Open access | en_US |
curtin.faculty | Science and Engineering | en_US |
curtin.contributor.orcid | Rouillard, Thibault [0000-0002-2465-7678] | en_US |