Show simple item record

dc.contributor.author: Best, C.
dc.contributor.author: Kroos, Christian
dc.contributor.author: Mulak, K.
dc.contributor.author: Halovic, S.
dc.contributor.author: Fort, M.
dc.contributor.author: Kitamura, C.
dc.contributor.editor: -
dc.date.accessioned: 2017-01-30T15:19:48Z
dc.date.available: 2017-01-30T15:19:48Z
dc.date.created: 2016-09-22T12:29:02Z
dc.date.issued: 2015
dc.date.submitted: 2016-09-22
dc.identifier.citation: Best, C. and Kroos, C. and Mulak, K. and Halovic, S. and Fort, M. and Kitamura, C. 2015. Message vs. messenger effects on cross-modal matching for spoken phrases, in The 1st Joint Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing (FAAVSP), Sep 11 2015, pp. 28-33. Vienna, Austria: ISCA.
dc.identifier.uri: http://hdl.handle.net/20.500.11937/45277
dc.description.abstract:

A core issue in speech perception and word recognition research is the nature of the information perceivers use to identify spoken utterances across indexical variations in their phonetic details, such as talker and accent differences. Separately, a crucial question in audio-visual research is the nature of the information perceivers use to recognize phonetic congruency between the audio and visual (talking face) signals that arise from speaking. We combined these issues in a study examining how differences between connected speech utterances (messages) versus between talkers and accents (messenger characteristics) contribute to recognition of cross-modal articulatory congruence between the audio-only (AO) and video-only (VO) components of spoken utterances. Participants heard AO phrases in their native regional English accent or another English accent, and then saw two synchronous VO displays of point-light talking faces, from which they had to select the one that corresponded to the audio target. The incorrect video in each pair showed either the same phrase as the audio target or a different one, produced by the same or a different talker, who spoke in either the same or a different English accent. Results indicate that cross-modal articulatory correspondence is detected more accurately and more quickly for message content than for messenger details, suggesting that recognising the linguistic message is more fundamental to cross-modal detection of audio-visual articulatory congruency than recognising messenger features. Nonetheless, messenger characteristics, especially accent, affected performance to some degree, analogous to recent findings in AO speech research.

dc.publisher: ISCA
dc.subject: cross-modal congruency
dc.subject: articulatory information
dc.subject: point-light talkers
dc.subject: talker and accent effects
dc.title: Message vs. messenger effects on cross-modal matching for spoken phrases
dc.type: Conference Paper
dcterms.dateSubmitted: 2016-09-22
dcterms.source.startPage: 28
dcterms.source.endPage: 33
dcterms.source.title: FAAVSP-2015
dcterms.source.series: FAAVSP-2015
dcterms.source.conference: FAAVSP - The 1st Joint Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing
dcterms.source.conferencedates: Sep 11 2015
dcterms.source.conferencelocation: Vienna, Austria
dcterms.source.place: Vienna, Austria
curtin.digitool.pid: 245564
curtin.pubStatus: Published
curtin.refereed: TRUE
curtin.department: School of Design and Art
curtin.identifier.scriptid: PUB-HUM-SDA-CK-27474
curtin.accessStatus: Fulltext not available


Files in this item

There are no files associated with this item.
