Do the Eyes have it? Linking Eye Movements to Sentence Comprehension in Reading and Listening

Julie Boland
University of Michigan

Eye tracking paradigms in both written and spoken modalities seem to represent the state of the art for behavioral investigations of language processing. But how well do we understand the linkage between eye movement data and syntactic or semantic processes in sentence comprehension?

Although ocular-motor models of reading have been developed (e.g., E-Z Reader) and many of the linking assumptions have been directly tested, we still don't fully understand how different dependent measures (e.g., first fixation time, probability of a regression) are linked to specific cognitive events. In some cases, multiple dependent measures provide converging evidence of processing difficulty (e.g., Frazier & Rayner, 1982). In other research, diverging evidence is used to distinguish processes at different levels of analysis (e.g., Boland & Blodgett, 2001, 2002; Ferreira & Henderson, 1990; Ni et al., 1998). For example, Boland and Blodgett (2001) distinguished between lexical effects on syntactic generation (first fixation data) and post-generation discourse congruency effects (downstream regressions). In another line of research, Boland and Blodgett (2002) distinguished between phrase structure construction and feature-checking processes, because syntactic category anomalies were reflected in first pass reading times while morpho-syntactic anomalies were not. If these interpretations are correct, the "diverging evidence" results have clear theoretical implications, but they also require a complex set of linking assumptions that is not yet fully worked out.

Spoken language eye tracking paradigms take advantage of our natural tendency to look at things as they are mentioned (Cooper, 1974) or as we reach for them (Tanenhaus et al., 1995). These paradigms will have the greatest impact on the theoretical debate in sentence comprehension if we can develop models of how eye movements during listening/acting are linked to syntactic and semantic processes. I will outline a research agenda that begins to address some crucial issues in developing such a model: (i) Should we begin with ocular-motor models of scene perception? (ii) How do the default assumptions differ in directed action vs. passive listening? (iii) What is the cognitive mechanism that generates "anticipatory looks"? (iv) Under what circumstances will the visual environment constrain syntactic and semantic processes?

References

Boland, J.E. & Blodgett, A. (2001). Understanding the constraints on syntactic generation: Lexical bias and discourse congruency effects on eye movements. Journal of Memory and Language, 45, 391-411.

Boland, J.E. & Blodgett, A. (2002). Eye movements as a measure of syntactic and semantic incongruity in unambiguous sentences. Manuscript submitted for publication.

Cooper, R. M. (1974). The control of eye fixations by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 84-107.

Ferreira, F. & Henderson, J. M. (1990). Use of verb information in syntactic parsing: Evidence from eye movements and word-by-word self-paced reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 16, 555-568.

Frazier, L. & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178-210.

Ni, W., Fodor, J. D., Crain, S., & Shankweiler, D. (1998). Anomaly detection: Eye movement patterns. Journal of Psycholinguistic Research, 27, 515-539.

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.