dc.contributor.author	Wurdemann, H.
dc.contributor.author	Georgiou, E.
dc.contributor.author	Cui, Lei
dc.contributor.author	Dai, J.
dc.contributor.editor	Primo Zingaretti
dc.date.accessioned	2017-01-30T12:30:00Z
dc.date.available	2017-01-30T12:30:00Z
dc.date.created	2013-09-05T20:00:25Z
dc.date.issued	2011
dc.identifier.citation	Wurdemann, Helge A. and Georgiou, Evangelos and Cui, Lei and Dai, Jian S. 2011. SLAM Using 3D reconstruction via a visual RGB and RGB-D sensory input, in ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications, Aug 28-31 2011. Washington DC: American Society of Mechanical Engineers.
dc.identifier.uri	http://hdl.handle.net/20.500.11937/22225
dc.identifier.doi	10.1115/DETC2011-47735
dc.description.abstract

This paper investigates the simultaneous localization and mapping (SLAM) problem by exploiting the Microsoft Kinect™ sensor array and an autonomous mobile robot capable of self-localization. Together they cover the major components of SLAM: mapping, sensing, locating, and modeling. The Kinect™ sensor array provides a dual camera output: RGB from a CMOS camera and RGB-D from a depth camera. The sensors are mounted on the KCLBOT, an autonomous nonholonomic two-wheel maneuverable mobile robot. The mobile robot platform can self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position, the goal for the mobile robot to reach, taking into consideration the obstacles in the environment, which are represented in a 3D spatial model. After a calibration routine, images extracted from the sensor yield a 3D reconstruction of the traversable environment for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost-effectiveness of this off-the-shelf sensor array. They show the effectiveness of producing a 3D reconstruction of an environment and the feasibility of using the Microsoft Kinect™ sensor for mapping, sensing, locating, and modeling, enabling the implementation of SLAM on this type of platform.
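The RGB-D-to-3D reconstruction step summarized in the abstract rests on the standard pinhole back-projection of depth pixels into camera-frame points. A minimal sketch of that step, assuming illustrative Kinect-class intrinsics (the values below are hypothetical, not the paper's calibration):

```python
def depth_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) with depth z (metres) into a
    3D camera-frame point using the pinhole camera model."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics for a Kinect-class depth camera (illustrative only).
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

# Toy 2x2 depth patch in metres; a real Kinect depth frame is 640x480.
depth = {(320, 240): 1.0, (321, 240): 1.0, (320, 241): 1.2, (321, 241): 1.2}

# Back-project every valid depth pixel into a small point cloud.
cloud = [depth_to_point(u, v, z, FX, FY, CX, CY) for (u, v), z in depth.items()]
```

Iterating this over a full calibrated depth frame, and registering the resulting points against the RGB image, yields the kind of 3D environment model the robot navigates.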

dc.publisher	ASME Press
dc.subject	Kinect
dc.subject	3D reconstruction
dc.subject	SLAM
dc.title	SLAM Using 3D reconstruction via a visual RGB and RGB-D sensory input
dc.type	Conference Paper
dcterms.source.startPage	615
dcterms.source.endPage	622
dcterms.source.title	Proceedings of the 2011 ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications
dcterms.source.series	Proceedings of the 2011 ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications
dcterms.source.isbn	978-0-7918-5480-8
dcterms.source.conference	ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications
dcterms.source.conference-start-date	Aug 28 2011
dcterms.source.conferencelocation	Washington DC
dcterms.source.place	New York
curtin.note

Published in: ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 3: 2011 ASME/IEEE International Conference on Mechatronic and Embedded Systems and Applications, Parts A and B. Copyright © 2011 by ASME.

curtin.department
curtin.accessStatus	Fulltext not available

