Audio and visual cues in a two-talker divided attention speech-monitoring task

Douglas S. Brungart*, Alexander J. Kordik, Brian D. Simpson

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

Although audiovisual (AV) cues are known to improve speech intelligibility in difficult listening environments, little is known about their role in divided attention tasks that require listeners to monitor multiple talkers at the same time. In this experiment, a call-sign-based multitalker listening test was used to evaluate performance in two-talker AV configurations that combined zero, one, or two channels of visual information (neither, one, or both talkers visible) with zero, one, or two channels of audio information (no audio, both talkers played from the same loudspeaker, or both talkers played through separate, spatially separated loudspeakers). The results were analyzed to determine the relative performance levels that would occur with each AV configuration when the target information was equally likely to originate from either of the two talkers in the stimulus. The results indicate that spatial separation of the audio signals has the greatest impact on performance in multichannel AV speech displays and that caution should be used when presenting a visual representation of only a single talker unless that talker is known to be the highest-priority talker in the combined AV stimulus. Potential applications of this research include the design of improved audiovisual speech displays for multichannel communications systems.
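As an illustration of the experimental design described in the abstract, the sketch below enumerates the nine AV configurations obtained by crossing the three visual conditions with the three audio conditions. The labels, dictionary names, and the function enumerate_av_configurations are hypothetical conveniences for this sketch and are not taken from the original study.

```python
from itertools import product

# Hypothetical labels for the three visual and three audio conditions
# described in the abstract (not the authors' original terminology).
VISUAL_CONDITIONS = {
    0: "no talkers visible",
    1: "one talker visible",
    2: "both talkers visible",
}
AUDIO_CONDITIONS = {
    0: "no audio",
    1: "both talkers from one loudspeaker (co-located)",
    2: "talkers from separate loudspeakers (spatially separated)",
}

def enumerate_av_configurations():
    """Cross the two factors to yield the 3 x 3 = 9 AV configurations."""
    for n_visual, n_audio in product(VISUAL_CONDITIONS, AUDIO_CONDITIONS):
        yield {
            "visual_channels": n_visual,
            "audio_channels": n_audio,
            "description": f"{VISUAL_CONDITIONS[n_visual]} + {AUDIO_CONDITIONS[n_audio]}",
        }

if __name__ == "__main__":
    for config in enumerate_av_configurations():
        print(config["visual_channels"], config["audio_channels"], config["description"])
```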

Original language: English
Pages (from-to): 562-573
Number of pages: 12
Journal: Human Factors
Volume: 47
Issue number: 3
DOIs
State: Published - Sep 2005
Externally published: Yes

