Show simple item record

dc.contributor.author: Ke, Q.
dc.contributor.author: An, Senjian
dc.contributor.author: Bennamoun, M.
dc.contributor.author: Sohel, F.
dc.contributor.author: Boussaid, F.
dc.identifier.citation: Ke, Q. and An, S. and Bennamoun, M. and Sohel, F. and Boussaid, F. 2017. SkeletonNet: Mining Deep Part Features for 3-D Action Recognition. IEEE Signal Processing Letters. 24 (6): pp. 731-735.

This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information between multiple frames are two important factors for action recognition. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation, rotation, and scale invariant. To learn robust temporal information, instead of treating the features of all frames as a time series, we transform the features into images and feed them to the proposed deep learning network, which contains two parts: one extracts general features from the input images, while the other generates a discriminative and compact representation for action recognition. The proposed method is tested on the SBU Kinect Interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset, and achieves state-of-the-art performance.
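The abstract's feature-to-image idea can be illustrated with a minimal sketch. This is not the authors' actual SkeletonNet pipeline (the letter's body-part features and network architecture are not reproduced here); the feature choice below is a hypothetical simplification that shows only the general pattern of making per-frame features translation and scale invariant and stacking them into a 2-D array a CNN could consume. Rotation invariance, which the paper also claims, is omitted for brevity.

```python
import numpy as np

def frame_features(joints, root=0):
    """Translation- and scale-invariant features for one frame.

    joints: (J, 3) array of 3-D joint coordinates.
    Hypothetical simplification: joint positions relative to a root
    joint (translation invariance), normalized by the mean distance
    of all joints to the root (scale invariance).
    """
    rel = joints - joints[root]                     # remove global position
    scale = np.linalg.norm(rel, axis=1).mean() or 1.0
    return (rel / scale).ravel()                    # remove global scale

def sequence_to_image(sequence):
    """Stack per-frame feature vectors into a 2-D 'feature image'.

    sequence: (T, J, 3) skeleton sequence. Returns a (T, 3J) array,
    i.e. one row per frame, mirroring the idea of transforming the
    features of all frames into an image rather than a time series.
    """
    return np.stack([frame_features(f) for f in sequence])

# Toy usage: 8 frames, 15 joints, random coordinates
seq = np.random.default_rng(0).normal(size=(8, 15, 3))
img = sequence_to_image(seq)
shifted = sequence_to_image(seq + np.array([1.0, -2.0, 0.5]))  # global shift
print(img.shape)                  # (8, 45)
print(np.allclose(img, shifted))  # True: features are translation invariant
```

Treating the resulting (T, 3J) array as a single-channel image lets standard 2-D convolutions mix information across both time (rows) and joints (columns), which is the motivation the abstract gives for the image transformation.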

dc.publisher: Institute of Electrical and Electronics Engineers
dc.title: SkeletonNet: Mining Deep Part Features for 3-D Action Recognition
dc.type: Journal Article
dcterms.source.title: IEEE Signal Processing Letters
curtin.department: School of Electrical Engineering, Computing and Mathematical Science (EECMS)
curtin.accessStatus: Fulltext not available
