
Showing posts with label eye-movement control. Show all posts

Saturday, March 11, 2023

New paper: SEAM: An Integrated Activation-Coupled Model of Sentence Processing and Eye Movements in Reading

Michael Betancourt, a giant in the field of Bayesian statistical modeling, once indirectly pointed out to me (in a podcast interview) that one should not try to model latent cognitive processes in reading by computing summary statistics like the mean difference between conditions and then fitting the model on those summary statistics. But that is exactly what we do in psycholinguistics. Most models (including those from my lab) evaluate model performance on summary statistics from the data (usually, a mean difference), abstracting away quite dramatically from the complex processes that resulted in those reading times and regressive eye movements. 

What Michael wanted instead was a detailed process model of how the observed fixations and eye-movement patterns arise. Obviously, such a model would be extremely complicated: one would have to specify the full details of oculomotor processes and their impact on eye movements, as well as a model of language comprehension, and then specify how these components interact to produce eye movements at the single-trial level. Such a model quickly becomes computationally intractable if one tries to estimate its parameters from data. That has been a major barrier to building one.

Interestingly, both eye-movement control models and models of sentence comprehension exist, but they live in parallel universes. Psychologists have almost always focused on eye-movement control, ignoring the impact of sentence comprehension processes (I once heard a talk by a psychologist who publicly called out psycholinguists, labeling them "crazy" for studying language processing in reading :). Similarly, most psycholinguists simply ignore the lower-level processes unfolding during reading and assume that language-processing events are responsible for differences in fixation durations or in leftward eye movements (regressions). The most that psycholinguists like me are willing to do is add word frequency etc. as a co-predictor of reading time or other dependent measures when investigating reading. But in most cases even that would go too far :).

What is missing is a model that brings these two lines of work together into one integrated model of reading, in which both components jointly determine where we move our eyes and for how long.

Max Rabe, who is wrapping up his PhD in psychology at Potsdam, Germany, demonstrates how this could be done: he takes a fully specified model of eye-movement control in reading (SWIFT) and integrates into it linguistic dependency-completion processes, following the principles of the ACT-R cognitive architecture. A key achievement is that the activation of the word being read is co-determined by both oculomotor processes, as specified in SWIFT, and cue-based retrieval processes, as specified in the activation-based model of retrieval. Another is showing how regressive eye movements are triggered when sentence-processing difficulty (here, similarity-based interference) arises during reading.
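To give a feel for the cue-based retrieval side, here is a toy ACT-R-style activation computation in Python. This is a minimal sketch of the general idea (spreading activation from retrieval cues, and the standard ACT-R latency equation); the function names, parameter values, and simplifications (no fan effect, no noise) are my own illustrative assumptions, not SEAM's actual equations:

```python
import math

def retrieval_activation(base, cues_matched, total_cues, W=1.0, mas=1.5):
    """Toy ACT-R-style activation: base-level activation plus spreading
    activation from the retrieval cues a chunk matches. Fan effects and
    noise are omitted for simplicity; parameter values are illustrative."""
    weight = W / total_cues          # source activation divided over the cues
    return base + cues_matched * weight * mas

def retrieval_latency(activation, F=0.2):
    """ACT-R latency equation: higher activation -> faster retrieval."""
    return F * math.exp(-activation)

# Similarity-based interference: a distractor that matches a subset of
# the retrieval cues gains some activation and competes with the target,
# slowing retrieval down -- the kind of difficulty that, in the model,
# can trigger a regressive eye movement.
target = retrieval_activation(base=0.0, cues_matched=2, total_cues=2)
distractor = retrieval_activation(base=0.0, cues_matched=1, total_cues=2)
```

The point of the sketch is only the qualitative pattern: the partially matching distractor receives nonzero activation, so it competes with the target at retrieval time.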

What made the model fitting possible was Bayesian parameter estimation: in an earlier (2021) Psychological Review paper (preprint here), Max Rabe shows how parameter estimation can be carried out in complex models whose likelihood function is not easy to work out.
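One generic strategy for Bayesian estimation when the likelihood is intractable but the model can be simulated is approximate Bayesian computation (ABC). The rejection-ABC toy below is only meant to convey the idea and is not the estimation method used in the paper; the gamma "simulator" and the choice of summary statistics, prior, and tolerance are all my own illustrative assumptions standing in for a complex simulator like SWIFT:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=200):
    # Stand-in simulator: pretend fixation durations come from a gamma
    # distribution whose shape parameter theta we want to estimate.
    return rng.gamma(shape=theta, scale=50.0, size=n)

def summaries(x):
    # Reduce a simulated dataset to a few summary statistics.
    return np.array([x.mean(), x.std()])

obs = summaries(simulate(4.0))   # "observed" data with true theta = 4

# Rejection ABC: draw theta from the prior, simulate a dataset, and keep
# only those draws whose summaries land close to the observed summaries.
prior_draws = rng.uniform(1.0, 8.0, size=20_000)
accepted = [th for th in prior_draws
            if np.linalg.norm(summaries(simulate(th)) - obs) < 10.0]

posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior over theta; shrinking the tolerance sharpens the approximation at the cost of more rejected simulations.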

Download the paper from arXiv.




Sunday, August 08, 2021

Podcast interview with me in "Betancourting disaster"

Michael Betancourt is a major force in applied Bayesian statistics. Over the years, he has written a huge number of case studies and tutorials relating to practical aspects of Bayesian modeling using Stan. He has also lectured at our summer school on statistics, which is held annually at Potsdam. He also has a large collection of publicly available talks that are worth watching. 

We have collaborated with Michael to produce two really important papers for cognitive scientists:

1. Daniel J. Schad, Michael Betancourt, and Shravan Vasishth. Toward a principled Bayesian workflow: A tutorial for cognitive science. Psychological Methods, 2020. Download here: https://arxiv.org/abs/1904.12765.

2. Daniel J. Schad, Bruno Nicenboim, Paul-Christian Bürkner, Michael Betancourt, and Shravan Vasishth. Workflow Techniques for the Robust Use of Bayes Factors. Available from arXiv:2103.08744v2, 2021. Download here: https://arxiv.org/abs/2103.08744.

He has a podcast, called Betancourting disaster. Michael recently interviewed me, and we talked about the challenges associated with modeling cognitive processes (e.g., reading processes and their interaction with sentence comprehension). You can listen to the whole thing here (it's about an hour-long conversation):

https://www.patreon.com/posts/50550798

Friday, May 14, 2021

New Psych Review paper by Max Rabe et al: A Bayesian approach to dynamical modeling of eye-movement control in reading of normal, mirrored, and scrambled texts

An important new paper by Max Rabe, a PhD student in the psychology department at Potsdam:

Open access pdf download: https://psyarxiv.com/nw2pb/

Reproducible code and data: https://osf.io/t9sbf/ 

Title: A Bayesian approach to dynamical modeling of eye-movement control in reading of normal, mirrored, and scrambled texts

Abstract: In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777–813). We used experimental data from 36 subjects who read the text in a normal and one of four manipulated text layouts (e.g., mirrored and scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.