    espace - Curtin’s institutional repository

    A new representation of skeleton sequences for 3D action recognition

    Access Status
    Fulltext not available
    Authors
    Ke, Q.
    Bennamoun, M.
    An, Senjian
    Sohel, F.
    Boussaid, F.
    Date
    2017
    Type
    Conference Paper
    Citation
    Ke, Q. and Bennamoun, M. and An, S. and Sohel, F. and Boussaid, F. 2017. A new representation of skeleton sequences for 3D action recognition, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 4570-4579.
    Source Title
    Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
    Source Conference
    30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
    DOI
    10.1109/CVPR.2017.486
    ISBN
    9781538604571
    School
    School of Electrical Engineering, Computing and Mathematical Sciences (EECMS)
    URI
    http://hdl.handle.net/20.500.11937/70274
    Collection
    • Curtin Research Publications
    Abstract

    © 2017 IEEE. This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the generated clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.
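    The clip-generation step described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a skeleton sequence given as a `(T, J, 3)` array of Cartesian joint positions, converts it to cylindrical coordinates, and builds one clip per coordinate channel, with one frame per (hypothetical) reference joint encoding each joint's values relative to that reference across the whole sequence. The choice of reference joints and the normalisation are assumptions for illustration.

    ```python
    import numpy as np

    def to_cylindrical(seq):
        """Convert (T, J, 3) Cartesian joint trajectories to cylindrical (rho, theta, z)."""
        x, y, z = seq[..., 0], seq[..., 1], seq[..., 2]
        rho = np.sqrt(x ** 2 + y ** 2)
        theta = np.arctan2(y, x)
        return np.stack([rho, theta, z], axis=-1)  # (T, J, 3)

    def make_clips(seq, ref_joints=(0, 4, 8, 12)):
        """Build three clips, one per cylindrical channel.

        Each frame in a clip spans the entire sequence: rows are time steps,
        columns are joints, and values are relative to one reference joint,
        so a single frame carries long-term temporal information while the
        set of frames varies the spatial relationship between joints.
        ref_joints is a hypothetical choice made for this sketch.
        """
        cyl = to_cylindrical(seq)
        clips = []
        for c in range(3):                               # one clip per channel
            frames = []
            for r in ref_joints:                         # one frame per reference joint
                rel = cyl[:, :, c] - cyl[:, r:r + 1, c]  # (T, J) values relative to joint r
                lo, hi = rel.min(), rel.max()
                frames.append((rel - lo) / (hi - lo + 1e-8))  # normalise to [0, 1]
            clips.append(np.stack(frames))               # (len(ref_joints), T, J)
        return clips

    # toy sequence: 10 time steps, 16 joints
    seq = np.random.rand(10, 16, 3)
    clips = make_clips(seq)
    ```

    Each of the three resulting clips could then be fed frame-by-frame to a CNN for feature extraction, with the per-frame features processed jointly (as in the paper's Multi-Task Learning Network) for classification.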

    Related items

    Showing items related by title, author, creator and subject.

    • Learning Clip Representations for Skeleton-Based 3D Action Recognition
      Ke, Q.; Bennamoun, M.; An, Senjian; Sohel, F.; Boussaid, F. (2018)
      This paper presents a new representation of skeleton sequences for 3D action recognition. Existing methods based on hand-crafted features or recurrent neural networks cannot adequately capture the complex spatial structures ...
    • SkeletonNet: Mining Deep Part Features for 3-D Action Recognition
      Ke, Q.; An, Senjian; Bennamoun, M.; Sohel, F.; Boussaid, F. (2017)
      This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information ...
    • Human animation from analysis and reconstruction of human motion in video sequences
      Zhang, Li (2009)
      This research aims to address one of the most challenging problems in the field of computer vision and computer graphics, that is, the reconstruction of smooth 3D human motions from monocular video containing unrestricted ...
