
dc.contributor.author: Arandjelovic, O.
dc.contributor.author: Pham, Duc Son
dc.contributor.author: Venkatesh, S.
dc.date.accessioned: 2017-03-15T22:17:02Z
dc.date.available: 2017-03-15T22:17:02Z
dc.date.created: 2017-02-26T19:31:36Z
dc.date.issued: 2016
dc.identifier.citation: Arandjelovic, O. and Pham, D. and Venkatesh, S. 2016. CCTV Scene Perspective Distortion Estimation From Low-Level Motion Features. IEEE Transactions on Circuits and Systems for Video Technology. 26 (5): pp. 939-949.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/49982
dc.identifier.doi: 10.1109/TCSVT.2015.2424055
dc.description.abstract:

Our aim is to estimate the perspective-effected geometric distortion of a scene from a video feed. In contrast to most related previous work, in this task we are constrained to use only low-level, spatiotemporally local motion features. This particular challenge arises in many semiautomatic surveillance systems that alert a human operator to potential abnormalities in the scene. Low-level spatiotemporally local motion features are sparse (and thus require comparatively little storage space) and sufficiently powerful in the context of video abnormality detection to reduce the need for human intervention by more than 100-fold. This paper introduces three significant contributions. First, we describe a dense algorithm for perspective estimation, which uses motion features to estimate the perspective distortion at each image locus and then polls all such local estimates to arrive at the globally best estimate. Second, we present an alternative coarse algorithm that subdivides the image frame into blocks and uses motion features to derive block-specific motion characteristics and to constrain the relationships between these characteristics, with the perspective estimate emerging as the result of a global optimization scheme. Third, we report the results of an evaluation using nine large data sets acquired with existing closed-circuit television (CCTV) cameras, not installed specifically for the purposes of this paper. Our findings demonstrate that both proposed methods are successful, their accuracy matching that of human labelling performed using complete visual data, which, by the constraints of the setup, is unavailable to our algorithms.
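The dense approach described in the abstract estimates a perspective measure at each image locus from local motion features and then polls the local estimates for a global answer. The following Python snippet is an illustrative sketch only, not the authors' algorithm (the full text is not attached to this record): it assumes a hypothetical linear model in which apparent motion magnitude shrinks towards a horizon row, derives one horizon estimate per sampled pair of features, and polls those estimates with a robust median.

```python
# Illustrative sketch only -- NOT the method of the cited paper. It assumes a
# toy perspective model in which the apparent speed of a motion feature grows
# linearly with distance below a horizon row, and "polls" per-pair estimates
# of that horizon with a median. The (row, speed) feature format is hypothetical.

import numpy as np


def estimate_horizon_row(features):
    """Poll local perspective estimates from sparse motion features.

    features: array of shape (N, 2) with columns (image_row, apparent_speed).
    Returns the estimated horizon row of the assumed model
        apparent_speed ~ k * (image_row - horizon_row).
    """
    rows, speeds = features[:, 0], features[:, 1]

    # Each pair of features gives one local estimate of the row at which
    # apparent speed would vanish. Sample random pairs and take the median,
    # which tolerates noisy or outlying features.
    rng = np.random.default_rng(0)
    i = rng.integers(0, len(features), size=2000)
    j = rng.integers(0, len(features), size=2000)
    valid = np.abs(speeds[i] - speeds[j]) > 1e-6
    slope = (rows[i] - rows[j])[valid] / (speeds[i] - speeds[j])[valid]
    horizon = rows[i][valid] - slope * speeds[i][valid]
    return float(np.median(horizon))


if __name__ == "__main__":
    # Synthetic features: true horizon at row 50, speeds grow away from it.
    true_horizon, k = 50.0, 0.1
    rows = np.random.default_rng(1).uniform(100, 500, size=300)
    speeds = k * (rows - true_horizon) + np.random.default_rng(2).normal(0, 0.5, 300)
    print(estimate_horizon_row(np.column_stack([rows, speeds])))  # approx. 50
```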

dc.publisher: IEEE Press
dc.title: CCTV Scene Perspective Distortion Estimation From Low-Level Motion Features
dc.type: Journal Article
dcterms.source.volume: 26
dcterms.source.number: 5
dcterms.source.startPage: 939
dcterms.source.endPage: 949
dcterms.source.issn: 1051-8215
dcterms.source.title: IEEE Transactions on Circuits and Systems for Video Technology
curtin.department: Department of Computing
curtin.accessStatus: Fulltext not available


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)
