TY - JOUR
T1 - Integration efficiency for speech perception within and across sensory modalities by normal-hearing and hearing-impaired individuals
AU - Grant, Ken W.
AU - Tufts, Jennifer B.
AU - Greenberg, Steven
N1 - Funding Information:
This research was supported by the Clinical Investigation Service, Walter Reed Army Medical Center, under Work Unit #00-2501, Grant No. DC 00792 from the National Institute on Deafness and Other Communication Disorders, Grant No. SBR 9720398 from the Learning and Intelligent Systems Initiative of the National Science Foundation to the International Computer Science Institute, and the Oticon Foundation, Copenhagen, Denmark. We would also like to thank Rosario Silipo for assistance in creating the auditory stimuli, Mary Cord for assistance in data collection, and Louis Braida for his help fitting the Prelabeling Model of Integration to our data. The opinions or assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the Department of the Army or the Department of Defense.
PY - 2007
Y1 - 2007
N2 - In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one-, two-, three-, and four-filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
AB - In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one-, two-, three-, and four-filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
UR - http://www.scopus.com/inward/record.url?scp=33846677360&partnerID=8YFLogxK
U2 - 10.1121/1.2405859
DO - 10.1121/1.2405859
M3 - Article
C2 - 17348537
AN - SCOPUS:33846677360
SN - 0001-4966
VL - 121
SP - 1164
EP - 1176
JO - Journal of the Acoustical Society of America
JF - Journal of the Acoustical Society of America
IS - 2
ER -