Fully automatic 3D facial expression recognition using local depth features
dc.contributor.author | Xue, Mingliang | |
dc.contributor.author | Mian, A. | |
dc.contributor.author | Liu, Wan-Quan | |
dc.contributor.author | Li, Ling | |
dc.date.accessioned | 2017-01-30T15:21:49Z | |
dc.date.available | 2017-01-30T15:21:49Z | |
dc.date.created | 2015-05-22T08:32:23Z | |
dc.date.issued | 2014 | |
dc.identifier.citation | Xue, M. and Mian, A. and Liu, W. and Li, L. 2014. Fully automatic 3D facial expression recognition using local depth features, in IEEE Winter Conference on Applications of Computer Vision, Mar 24-26 2014, pp. 1096-1103. Steamboat Springs, CO, USA: Institute of Electrical and Electronics Engineers. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/45568 | |
dc.identifier.doi | 10.1109/WACV.2014.6835736 | |
dc.description.abstract | Facial expressions form a significant part of our nonverbal communication, and understanding them is essential for effective human-computer interaction. Due to the diversity of facial geometry and expressions, automatic expression recognition is a challenging task. This paper deals with the problem of person-independent facial expression recognition from a single 3D scan. We consider only the 3D shape because facial expressions are mostly encoded in facial geometry deformations rather than textures. Unlike the majority of existing works, our method is fully automatic, including the detection of landmarks. We detect the four eye corners and the nose tip in real time on the depth image and its gradients using Haar-like features and an AdaBoost classifier. From these five points, another 25 heuristic points are defined to extract local depth features that represent facial expressions. The depth features are projected to a lower-dimensional linear subspace where feature selection is performed by maximizing their relevance and minimizing their redundancy. The selected features are then used to train a multi-class SVM for the final classification. Experiments on the benchmark BU-3DFE database show that the proposed method outperforms existing automatic techniques and is comparable even to approaches that use manually annotated landmarks. | |
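The abstract describes a pipeline of local depth features, projection to a linear subspace, relevance/redundancy feature selection, and a multi-class SVM. The following is a minimal, hypothetical sketch of that pipeline, not the authors' implementation: feature dimensions, hyperparameters, the PCA projection, and the greedy mRMR-style criterion are all assumptions, and scikit-learn stands in for whatever tools the paper actually used.

# Hypothetical sketch of the pipeline outlined in the abstract (assumed sizes and parameters):
# local depth features -> linear subspace projection -> mRMR-style selection -> multi-class SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 25 * 32))   # placeholder: 25 local patches x 32-dim depth feature each
y = rng.integers(0, 6, size=600)      # six prototypical expressions (anger, disgust, fear, happiness, sadness, surprise)

# 1. Project the concatenated depth features to a lower-dimensional linear subspace (PCA assumed here).
pca = PCA(n_components=60, random_state=0)
Z = pca.fit_transform(X)

# 2. Greedy mRMR-style selection: maximise relevance (mutual information with the label)
#    while penalising redundancy (mean absolute correlation with already-selected features).
relevance = mutual_info_classif(Z, y, random_state=0)
corr = np.abs(np.corrcoef(Z.T))              # feature-feature correlation matrix
selected = [int(np.argmax(relevance))]
while len(selected) < 20:
    remaining = [j for j in range(Z.shape[1]) if j not in selected]
    scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
    selected.append(remaining[int(np.argmax(scores))])

# 3. Train a multi-class SVM (one-vs-one by default in scikit-learn) on the selected features.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, Z[:, selected], y, cv=5).mean())

With real BU-3DFE depth features in place of the random placeholders, the same three stages would reproduce the structure of the classification step described above; the numbers printed here are meaningless for synthetic data.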
dc.publisher | Institute of Electrical and Electronics Engineers | |
dc.subject | Three-dimensional displays | |
dc.subject | feature selection | |
dc.subject | learning (artificial intelligence) | |
dc.subject | feature extraction | |
dc.subject | Face recognition | |
dc.subject | Vectors | |
dc.subject | human computer interaction | |
dc.subject | support vector machines | |
dc.subject | Nose | |
dc.subject | Haar transforms | |
dc.subject | Mouth | |
dc.subject | image classification | |
dc.title | Fully automatic 3D facial expression recognition using local depth features | |
dc.type | Conference Paper | |
dcterms.source.startPage | 1096 | |
dcterms.source.endPage | 1103 | |
dcterms.source.title | 2014 IEEE Winter Conference on Applications of Computer Vision (WACV) | |
dcterms.source.series | 2014 IEEE Winter Conference on Applications of Computer Vision (WACV) | |
dcterms.source.conference | WACV 2014: IEEE Winter Conference on Applications of Computer Vision | |
dcterms.source.conference-start-date | Mar 24 2014 | |
dcterms.source.conferencelocation | Steamboat Springs, CO, USA | |
dcterms.source.place | 445 Hoes Ln, Piscataway, NJ 08855 United States | |
curtin.department | Department of Computing | |
curtin.accessStatus | Fulltext not available |