Multimodal models for contextual affect assessment in real-time
Date
2019
Abstract
Most affect classification schemes rely on single-cue models that, while nearly accurate, fall below the required accuracy under certain conditions. We investigate how the holism of a multimodal solution can be exploited for affect classification. This paper presents the design and implementation of a prototype, stand-alone, real-time multimodal affective state classification system. The system combines a facial expression classifier with a speech classifier that analyses both paralanguage and propositional content, yielding a holistic classifier. The proposed classification scheme comprises a Support Vector Machine (SVM) for paralanguage, a K-Nearest Neighbour (KNN) classifier for propositional content, and an InceptionV3 neural network for facial expressions of affective states. The SVM and InceptionV3 models achieved validation accuracies of 99.2% and 92.78% respectively.
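The abstract names the three per-modality classifiers but does not specify how their outputs are combined. The sketch below illustrates one common option, decision-level (late) fusion by weighted averaging of per-modality class probabilities. The synthetic feature sets, the four-class label set, the fusion weights, and the facial_branch placeholder standing in for the InceptionV3 model are all illustrative assumptions, not the authors' implementation.

# A minimal sketch of decision-level (late) fusion across the three
# classifiers named in the abstract. Feature extraction, the datasets,
# the fusion weights, and the stand-in for the InceptionV3 facial
# branch are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_CLASSES = 4  # assumed label set, e.g. happy, sad, angry, neutral

# Synthetic stand-ins for the two speech-derived feature sets.
X_para = rng.normal(size=(200, 12))  # paralanguage features (e.g. pitch, energy)
X_text = rng.normal(size=(200, 50))  # propositional-content features (e.g. bag of words)
y = rng.integers(0, N_CLASSES, size=200)

# Modality-specific classifiers, as in the abstract.
svm = SVC(probability=True).fit(X_para, y)                 # paralanguage branch
knn = KNeighborsClassifier(n_neighbors=5).fit(X_text, y)   # propositional branch

def facial_branch(frame):
    """Placeholder for the InceptionV3 facial-expression classifier;
    here it simply returns a uniform class distribution."""
    return np.full(N_CLASSES, 1.0 / N_CLASSES)

def classify(para_feats, text_feats, frame, weights=(1.0, 1.0, 1.0)):
    """Fuse per-modality class probabilities by a weighted average."""
    probs = np.stack([
        svm.predict_proba(para_feats.reshape(1, -1))[0],
        knn.predict_proba(text_feats.reshape(1, -1))[0],
        facial_branch(frame),
    ])
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused))

print(classify(X_para[0], X_text[0], frame=None))

Decision-level fusion is only one possibility; feature-level fusion, which concatenates modality features before a single classifier, is the main alternative. The abstract's use of separate per-modality classifiers suggests decisions are combined after each branch, which is what the sketch assumes.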
Related items
Showing items related by title, author, creator and subject.
-
Williams, Richard Malcolm (2006) Suppliers of wheat must ensure that their products have the required quality profile demanded by customers and consistently deliver that quality in order to be competitive. Australia’s wheat industry is highly exposed to ...
-
Schäfer, Axel (2009) Background summary. Leg pain is a common complaint in relation to low back pain (LBP), present in up to 65% of all patients with LBP. Radiating leg pain is an important predictor for chronicity of LBP and an indicator of ...
-
Khan, Masood Mehmood; Ward, R. D.; Ingleby, M. (2009) Earlier researchers were able to extract the transient facial thermal features from thermal infrared images (TIRIs) to make binary distinctions between the expressions of affective states. However, effective human-computer ...