
dc.contributor.author: Vice, Jordan
dc.contributor.author: Khan, Masood
dc.contributor.author: Tan, Tele
dc.contributor.author: Murray, Iain
dc.contributor.author: Yanushkevich, Svetlana
dc.date.accessioned: 2023-12-12T10:31:05Z
dc.date.available: 2023-12-12T10:31:05Z
dc.date.issued: 2023
dc.identifier.citation: Vice, J. and Khan, M. and Tan, T. and Murray, I. and Yanushkevich, S. 2023. A Hierarchical Separation and Classification Network for Dynamic Micro-Expression Classification. IEEE Transactions on Computational Social Systems.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/93940
dc.identifier.doi: 10.1109/TCSS.2023.3334823
dc.description.abstract:

Models of the seven discrete expressions developed from macro-level facial muscle variations suffice for identifying macro-level expressions of affective states, but they cannot discretise the continuous, dynamic micro-level variations in facial expressions. We present a Hierarchical Separation and Classification Network (HSCN) for discovering dynamic, continuous, macro- and micro-level variations in facial expressions of affective states. In the HSCN, we first invoke an unsupervised cosine similarity-based separation method on continuous facial expression data to extract twenty-one dynamic facial expression classes from the seven common discrete affective states. The between-cluster separation is then optimised to discover the macro-level changes resulting from facial muscle activations. A subsequent step in the HSCN separates the upper and lower facial regions to capture changes pertaining to upper and lower facial muscle activations. Data from the two separated facial regions are then clustered in a linear discriminant space using similarities in muscular activation patterns. Next, the actual dynamic expression data are mapped onto the discriminant features to develop a rule-based expert system that classifies twenty-one upper and twenty-one lower micro-expressions. A random forest classifies the twenty-one macro-level facial expressions with 76.11% accuracy, while support vector machines (SVMs), applied separately to the upper and lower facial regions in tandem, classify them with respective accuracies of 73.63% and 87.68%. This work demonstrates a novel and effective method for dynamic assessment of affective states. The HSCN further demonstrates that facial muscle variations gathered from the upper, lower or full face suffice for classifying affective states. We also provide new insight into the discovery of micro-level facial muscle variations and their utilization in dynamic assessment of facial expressions of affective states.

dc.relation.sponsoredby: Curtin University
dc.relation.sponsoredby: University of Calgary (partial)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Cosine similarity-based separation, Hierarchical classification, Micro-expression detection, Affective state assessment, Facial expression classification, Rule-based systems
dc.title: A Hierarchical Separation and Classification Network for Dynamic Micro-Expression Classification
dc.type: Journal Article
dcterms.source.title: IEEE Transactions on Computational Social Systems
dc.date.updated: 2023-12-12T10:31:03Z
curtin.department: School of Civil and Mechanical Engineering
curtin.accessStatus: Open access
curtin.faculty: Faculty of Science and Engineering
curtin.contributor.orcid: Khan, Masood [0000-0002-2769-2380]
curtin.contributor.orcid: Murray, Iain [0000-0003-1840-9624]
curtin.contributor.orcid: Tan, Tele [0000-0003-3195-3480]
curtin.contributor.scopusauthorid: Khan, Masood [7410317782]
curtin.repositoryagreement: V3


