Calibration of Audio-Video Sensors for Multi-Modal Event Indexing
This paper addresses the coordinated use of video and audio cues to capture and index surveillance events with multimodal labels. Its focus is a joint-sensor calibration technique that exploits audio-visual observations to improve calibration accuracy. A significant feature of this approach is the ability to continuously check and update the calibration status of the sensor suite, making it resilient to independent drift in the individual sensors. We present scenarios in which this system is used to enhance surveillance.
Showing items related by title, author, creator and subject.
Kühnapfel, Thorsten (2009) For humans, hearing is the second most important sense, after sight. Therefore, acoustic information greatly contributes to observing and analysing an area of interest. For this reason combining audio and video cues for ...
Chan, T.; Lichti, D.; Belton, David (2013) At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a ...
McAtee, Brendon Kynnie (2003) Remote sensing of land surface temperature (LST) is a complex task. From a satellite-based perspective the radiative properties of the land surface and the atmosphere are inextricably linked. Knowledge of both is required ...