Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability
dc.contributor.author | Vice, Jordan | |
dc.contributor.author | Khan, Masood | |
dc.contributor.author | Tan, Tele | |
dc.contributor.author | Yanushkevich, Svetlana | |
dc.contributor.editor | Papadopoulos, George | |
dc.contributor.editor | Angelov, Plamen | |
dc.date.accessioned | 2022-06-08T07:36:56Z | |
dc.date.available | 2022-06-08T07:36:56Z | |
dc.date.issued | 2022 | |
dc.identifier.citation | Vice, J. and Khan, M. and Tan, T. and Yanushkevich, S. 2022. Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability. In: 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems, 25-27 May 2022, Larnaca, Cyprus. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/88713 | |
dc.identifier.doi | 10.1109/EAIS51927.2022.9787730 | |
dc.description.abstract |
Independent, discrete models like Paul Ekman’s six basic emotions model are widely used in affective state assessment (ASA) and facial expression classification. However, the continuous and dynamic nature of human expressions often needs to be considered for accurately assessing facial expressions of affective states. This paper investigates how mutual information-carrying continuous models can be extracted and used in continuous and dynamic facial expression classification systems for improving the efficacy and reliability of ASA systems. A novel, hybrid learning model that projects continuous data onto a multidimensional hyperplane is proposed. Through cosine similarity-based clustering (unsupervised) and classification (supervised) processes, our hybrid approach allows us to transform seven discrete facial expression models into twenty-one facial expression models that include micro-expressions. The proposed continuous, dynamic classifier achieved greater than 73% accuracy when evaluated with Random Forest, Support Vector Machine (SVM) and Neural Network classification architectures. The presented system was validated using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the extended Cohn-Kanade (CK+) dataset. | |
dc.language | English | |
dc.publisher | IEEE | |
dc.subject | 4601 - Applied computing | |
dc.subject | 4602 - Artificial intelligence | |
dc.subject | 4603 - Computer vision and multimedia computation | |
dc.subject | 4611 - Machine learning | |
dc.subject | 0915 - Interdisciplinary Engineering | |
dc.title | Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability | |
dc.type | Conference Paper | |
dcterms.source.volume | 1 | |
dcterms.source.number | 1 | |
dcterms.source.title | Proceedings of the 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems | |
dcterms.source.isbn | 978-1-6654-3706-6 | |
dcterms.source.conference | 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems | |
dcterms.source.conference-start-date | 25 May 2022 | |
dcterms.source.conferencelocation | Larnaca, Cyprus | |
dcterms.source.place | New Jersey, USA | |
dc.date.updated | 2022-06-08T07:36:55Z | |
curtin.department | School of Civil and Mechanical Engineering | |
curtin.accessStatus | Fulltext not available | |
curtin.faculty | Faculty of Science and Engineering | |
curtin.contributor.orcid | Khan, Masood [0000-0002-2769-2380] | |
dcterms.source.conference-end-date | 27 May 2022 | |
curtin.contributor.scopusauthorid | Khan, Masood [7410317782] |