
    SkeletonNet: Mining Deep Part Features for 3-D Action Recognition

    Access Status: Fulltext not available
    Authors: Ke, Q.; An, Senjian; Bennamoun, M.; Sohel, F.; Boussaid, F.
    Date: 2017
    Type: Journal Article
    Citation: Ke, Q. and An, S. and Bennamoun, M. and Sohel, F. and Boussaid, F. 2017. SkeletonNet: Mining Deep Part Features for 3-D Action Recognition. IEEE Signal Processing Letters. 24 (6): pp. 731-735.
    Source Title: IEEE Signal Processing Letters
    DOI: 10.1109/LSP.2017.2690339
    ISSN: 1070-9908
    School: School of Electrical Engineering, Computing and Mathematical Sciences (EECMS)
    URI: http://hdl.handle.net/20.500.11937/69968
    Collection: Curtin Research Publications
    Abstract

    This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information between multiple frames are two important factors for action recognition. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation-, rotation-, and scale-invariant. To learn robust temporal information, instead of treating the features of all frames as a time series, we transform the features into images and feed them to the proposed deep learning network, which contains two parts: one extracts general features from the input images, while the other generates a discriminative and compact representation for action recognition. The proposed method is tested on the SBU Kinect Interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset and achieves state-of-the-art performance.
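    The pipeline described in the abstract (invariant per-frame part features, a feature-to-image transform, and a network with a general feature extractor followed by a compact representation) can be sketched roughly as below. This is an illustrative PyTorch sketch only: the joint/edge layout, layer sizes, image resolution, and the cosine-based part features are assumptions chosen to satisfy the stated invariances, not the authors' released implementation.

    # Illustrative sketch only; hypothetical skeleton layout and layer sizes.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical body-part edges as (parent, child) joint indices.
    EDGES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6), (6, 7)]

    def invariant_frame_features(joints: torch.Tensor) -> torch.Tensor:
        """joints: (T, J, 3) sequence of 3-D joint coordinates.
        Returns (T, E*E) per-frame features built from cosines between
        body-part vectors: joint differences remove translation, angles are
        preserved under rotation, and unit-normalising removes scale."""
        parts = torch.stack([joints[:, c] - joints[:, p] for p, c in EDGES], dim=1)  # (T, E, 3)
        parts = F.normalize(parts, dim=-1)                # unit length -> scale-invariant
        cos = torch.matmul(parts, parts.transpose(1, 2))  # (T, E, E) pairwise cosines
        return cos.flatten(1)                             # (T, E*E)

    def to_feature_image(feats: torch.Tensor, size=(64, 64)) -> torch.Tensor:
        """Stack per-frame features over time into a 2-D map and resize it to a
        fixed 'image' so a CNN can consume variable-length sequences."""
        img = feats.t().unsqueeze(0).unsqueeze(0)         # (1, 1, D, T)
        return F.interpolate(img, size=size, mode="bilinear", align_corners=False)

    class TwoPartNet(nn.Module):
        """Part 1: a small CNN extracting general features from the input image.
        Part 2: fully connected layers producing a compact, discriminative
        representation, followed by the action classifier."""
        def __init__(self, num_classes: int, embed_dim: int = 128):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(4),
            )
            self.embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, embed_dim), nn.ReLU())
            self.classifier = nn.Linear(embed_dim, num_classes)

        def forward(self, x):
            z = self.embed(self.backbone(x))  # compact representation
            return self.classifier(z), z

    # Usage with random data standing in for one skeleton sequence.
    seq = torch.randn(80, 8, 3)                        # 80 frames, 8 joints
    img = to_feature_image(invariant_frame_features(seq))
    logits, embedding = TwoPartNet(num_classes=60)(img)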

    Related items

    Showing items related by title, author, creator and subject.

    • A new representation of skeleton sequences for 3D action recognition
      Ke, Q.; Bennamoun, M.; An, Senjian; Sohel, F.; Boussaid, F. (2017)
      © 2017 IEEE. This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips ...
    • Learning Clip Representations for Skeleton-Based 3D Action Recognition
      Ke, Q.; Bennamoun, M.; An, Senjian; Sohel, F.; Boussaid, F. (2018)
      This paper presents a new representation of skeleton sequences for 3D action recognition. Existing methods based on hand-crafted features or recurrent neural networks cannot adequately capture the complex spatial structures ...
    • Human animation from analysis and reconstruction of human motion in video sequences
      Zhang, Li (2009)
      This research aims to address one of the most challenging problems in the field of computer vision and computer graphics, that is, the reconstruction of smooth 3D human motions from monocular video containing unrestricted ...