Recent studies of auditory-visual integration have reached diametrically opposed conclusions as to whether individuals differ in their ability to integrate auditory and visual speech cues. A study by Massaro and Cohen [J. Acoust. Soc. Am. 108(2), 784-789 (2000)] reported that individuals are essentially equivalent in their ability to integrate auditory and visual speech information, whereas a study by Grant and Seitz [J. Acoust. Soc. Am. 104(4), 2438-2450 (1998)] reported substantial variability across subjects in auditory-visual integration for both sentences and nonsense syllables. This letter discusses issues related to the measurement of auditory-visual integration, as well as the modeling efforts used to separate information extraction from information processing.