Development of a test battery for evaluating speech perception in complex listening environments

Douglas S. Brungart*, Benjamin M. Sheffield, Lina R. Kubli

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

35 Scopus citations


In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method-of-adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.

Original language: English
Pages (from-to): 777-790
Number of pages: 14
Journal: Journal of the Acoustical Society of America
Issue number: 2
State: Published - Aug 2014
Externally published: Yes
