Message vs. messenger effects on cross-modal matching for spoken phrases
Date
2015
Abstract
A core issue in speech perception and word recognition research is the nature of the information perceivers use to identify spoken utterances across indexical variations in their phonetic details, such as talker and accent differences. Separately, a crucial question in audio-visual research is the nature of the information perceivers use to recognise phonetic congruency between the audio and visual (talking face) signals that arise from speaking. We combined these issues in a study examining how differences between connected speech utterances (messages) versus differences between talkers and accents (messenger characteristics) contribute to recognition of cross-modal articulatory congruence between audio-only (AO) and video-only (VO) components of spoken utterances. Participants heard AO phrases in their native regional English accent or another English accent, and then saw two synchronous VO displays of point-light talking faces, from which they had to select the one that corresponded to the audio target. The incorrect video in each pair showed either the same phrase as the audio target or a different one, produced by either the same or a different talker, who spoke in either the same or a different English accent. Results indicate that cross-modal articulatory correspondence is detected more accurately and more quickly for message content than for messenger details, suggesting that recognising the linguistic message is more fundamental to cross-modal detection of audio-visual articulatory congruency than recognising messenger features. Nonetheless, messenger characteristics, especially accent, affected performance to some degree, analogous to recent findings in AO speech research.
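For illustration only, and not part of the study materials, the following minimal Python sketch enumerates the foil-video condition space implied by the design, in which the incorrect video can match or mismatch the audio target in phrase, talker, and accent (all names below are hypothetical):

    from itertools import product

    # Illustrative sketch: list the same/different combinations for the
    # incorrect (foil) video relative to the audio target.
    FACTORS = {
        "phrase": ("same", "different"),
        "talker": ("same", "different"),
        "accent": ("same", "different"),
    }

    def foil_conditions():
        """Yield each foil condition as a dict, e.g. {'phrase': 'same', ...}."""
        names = list(FACTORS)
        for combo in product(*(FACTORS[n] for n in names)):
            yield dict(zip(names, combo))

    if __name__ == "__main__":
        for i, cond in enumerate(foil_conditions(), start=1):
            print(i, cond)  # 2 x 2 x 2 = 8 possible combinations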
Related items
Showing items related by title, author, creator and subject.
-
Best, C.; Kroos, Christian; Irwin, J. (2010) We examined infants’ sensitivity to articulatory organ congruency between audio-only and silent-video consonants (lip vs. tongue tip closure) to evaluate three theoretical accounts of audio-visual perceptual development ...
-
Kim, J.; Kroos, Christian; Davis, C. (2010) Parsing of information from the world into objects and events occurs in both the visual and auditory modalities. It has been suggested that visual and auditory scene perceptions involve similar principles of perceptual ...
-
Best, C.; Kroos, Christian; Irwin, J. (2011) In a prior study infants habituated to an audio-only labial or alveolar, native English voiceless or non-native ejective stop, then saw silent videos of stops at each place [1]. 4-month-olds gazed more at congruent videos ...