SLAM Using 3D reconstruction via a visual RGB and RGB-D sensory input
dc.contributor.author | Wurdemann, H. | |
dc.contributor.author | Georgiou, E. | |
dc.contributor.author | Cui, Lei | |
dc.contributor.author | Dai, J. | |
dc.contributor.editor | Primo Zingaretti | |
dc.date.accessioned | 2017-01-30T12:30:00Z | |
dc.date.available | 2017-01-30T12:30:00Z | |
dc.date.created | 2013-09-05T20:00:25Z | |
dc.date.issued | 2011 | |
dc.identifier.citation | Wurdemann, Helge A. and Georgiou, Evangelos and Cui, Lei and Dai, Jian S. 2011. SLAM Using 3D reconstruction via a visual RGB and RGB-D sensory input, in ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications, Aug 28-31 2011. Washington DC: American Society of Mechanical Engineers. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/22225 | |
dc.identifier.doi | 10.1115/DETC2011-47735 | |
dc.description.abstract |
This paper investigates the simultaneous localization and mapping (SLAM) problem by exploiting the Microsoft Kinect™ sensor array and an autonomous mobile robot capable of self-localization. Together, they cover the major features of SLAM: mapping, sensing, locating, and modeling. The Kinect™ sensor array provides a dual camera output of RGB, from a CMOS camera, and RGB-D, from a depth camera. The sensors are mounted on the KCLBOT, an autonomous, nonholonomic, two-wheel maneuverable mobile robot. The mobile robot platform can self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position, which is the goal for the mobile robot to reach while taking into account the obstacles in the environment, which are represented in a 3D spatial model. After a calibration routine, images extracted from the sensor are used to produce a 3D reconstruction of the traversable environment for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost effectiveness of this off-the-shelf sensor array and show its effectiveness in producing a 3D reconstruction of an environment, as well as the feasibility of using the Microsoft Kinect™ sensor for mapping, sensing, locating, and modeling, enabling the implementation of SLAM on this type of platform. | |
dc.publisher | ASME Press | |
dc.subject | Kinect | |
dc.subject | 3D reconstruction | |
dc.subject | SLAM | |
dc.title | SLAM Using 3D reconstruction via a visual RGB and RGB-D sensory input | |
dc.type | Conference Paper | |
dcterms.source.startPage | 615 | |
dcterms.source.endPage | 622 | |
dcterms.source.title | Proceedings of the 2011 ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications | |
dcterms.source.series | Proceedings of the 2011 ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications | |
dcterms.source.isbn | 978-0-7918-5480-8 | |
dcterms.source.conference | ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications | |
dcterms.source.conference-start-date | Aug 28 2011 | |
dcterms.source.conferencelocation | Washington DC | |
dcterms.source.place | New York | |
curtin.note |
Published in: ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 3: 2011 ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications, Parts A and B. Copyright © 2011 by ASME. | |
curtin.department | ||
curtin.accessStatus | Fulltext not available |