
Thursday, December 17, 2020

New paper: The effect of decay and lexical uncertainty on processing long-distance dependencies in reading

The effect of decay and lexical uncertainty on processing long-distance dependencies in reading

Kate Stone, Titus von der Malsburg, Shravan Vasishth

Download here: https://peerj.com/articles/10438/

Abstract:

 To make sense of a sentence, a reader must keep track of dependent relationships between words, such as between a verb and its particle (e.g. turn the music down). In languages such as German, verb-particle dependencies often span long distances, with the particle only appearing at the end of the clause. This means that it may be necessary to process a large amount of intervening sentence material before the full verb of the sentence is known. To facilitate processing, previous studies have shown that readers can preactivate the lexical information of neighbouring upcoming words, but less is known about whether such preactivation can be sustained over longer distances. We asked the question, do readers preactivate lexical information about long-distance verb particles? In one self-paced reading and one eye tracking experiment, we delayed the appearance of an obligatory verb particle that varied only in the predictability of its lexical identity. We additionally manipulated the length of the delay in order to test two contrasting accounts of dependency processing: that increased distance between dependent elements may sharpen expectation of the distant word and facilitate its processing (an antilocality effect), or that it may slow processing via temporal activation decay (a locality effect). We isolated decay by delaying the particle with a neutral noun modifier containing no information about the identity of the upcoming particle, and no known sources of interference or working memory load. Under the assumption that readers would preactivate the lexical representations of plausible verb particles, we hypothesised that a smaller number of plausible particles would lead to stronger preactivation of each particle, and thus higher predictability of the target. This in turn should have made predictable target particles more resistant to the effects of decay than less predictable target particles. 
The eye tracking experiment provided evidence that higher predictability did facilitate reading times, but found evidence against any effect of decay or its interaction with predictability. The self-paced reading study provided evidence against any effect of predictability or temporal decay, or their interaction. In sum, we provide evidence from eye movements that readers preactivate long-distance lexical content and that adding neutral sentence information does not induce detectable decay of this activation. The findings are consistent with accounts suggesting that delaying dependency resolution may only affect processing if the intervening information either confirms expectations or adds to working memory load, and that temporal activation decay alone may not be a major predictor of processing time.

Saturday, December 12, 2020

New paper: A Principled Approach to Feature Selection in Models of Sentence Processing

 A Principled Approach to Feature Selection in Models of Sentence Processing

Garrett Smith and Shravan Vasishth

Paper downloadable from: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.12918

Abstract

Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt’s eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.
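The paper's central quantity, the similarity between a distributed retrieval cue and a word's feature vector, is easy to illustrate. The following Python sketch uses made-up 4-dimensional vectors; the words, values, and dimensionality are invented for the example, and this is not the paper's actual embedding model or code:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: near 1 = very similar, near 0 = unrelated.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings" (invented values, not real word vectors).
features = {
    "letter": np.array([0.9, 0.1, 0.3, 0.2]),
    "plate":  np.array([0.2, 0.8, 0.7, 0.1]),
}

# A hypothetical retrieval cue assembled at the verb "shattered":
# roughly, "something breakable".
cue_shattered = np.array([0.3, 0.9, 0.6, 0.2])

# Higher cue-feature similarity = more plausible retrieval candidate.
scores = {w: cosine_similarity(cue_shattered, f) for w, f in features.items()}
best = max(scores, key=scores.get)  # the noun best matched by the cue
```

On these toy numbers, "plate" matches the breakability cue better than "letter", which is the kind of similarity asymmetry the paper links to illusions of plausibility.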

Tuesday, November 24, 2020

How to become a professor in Germany---a unique tutorial

How to become a Professor in Germany (Online-Seminar)

For sign-up details, see here: https://www.dhvseminare.de/index.php?module=010700&event=187&catalog_id=3&category_id=15&language_id=

Live-Online-Seminar

This seminar addresses young scientists who are considering a career as a professor at a German university or are already in the middle of an application process for a professorship in Germany. It will give the participants an overview of the career paths to a professorship, covering the legal requirements, the appointment procedure and the legal status of a professor. 

The seminar also addresses how to approach the search for relevant job advertisements, how to prepare the written application documents and how to make a good impression during the further steps in the selection process.
In the second part of the seminar, the participants will receive an overview of the next steps for the successful candidates. This includes the appointment negotiations with German universities, the legal framework and the strategic preparation for those negotiations.

 
Speaker:

RA Dr. Vanessa Adam, legal counsel for higher education and employment law at the Deutscher Hochschulverband

RA Katharina Lemke, legal counsel for higher education and employment law at the Deutscher Hochschulverband


Schedule:

09:00-10:00 Career Paths to a Professorship (Ms Lemke)

10:00-10:15 Break

10:15-11:45 Application for a Professorship (Dr. Adam)

11:45-12:15 Break

12:15-13:15 Negotiations with the University (Legal Framework) (Ms Lemke)

13:15-13:30 Break

13:30-14:30 Negotiations with the University (Strategy) (Dr. Adam)


Included in the price:

Seminar documents in electronic form (via download).


Thursday, November 12, 2020

New paper: A computational evaluation of two models of retrieval processes in sentence processing in aphasia

Here is another important paper from my lab, led by my PhD student Paula Lissón, with a long list of co-authors.

This paper, which also depends heavily on the amazing capabilities of Stan, investigates the quantitative predictions of two competing models of retrieval processes: the cue-based retrieval model of Lewis and Vasishth, and the direct-access model of McElree. We have done such an investigation before, in a very exciting paper by Bruno Nicenboim, using self-paced reading data from a German number interference experiment.

What is interesting about this new paper by Paula is that the data come from individuals with aphasia and control participants. Such data are extremely difficult to collect, and as a result many papers report experimental results from a handful of people with aphasia, sometimes as few as seven. This paper has much more data, thanks to the hard work of David Caplan.

The big achievements of this paper are that it provides a principled approach to comparing the two competing models' predictions, and that it derives testable predictions (which we are about to evaluate with new data from German individuals with aphasia---watch this space). As is always the case in psycholinguistics, even with this relatively large dataset, there just isn't enough data to draw unequivocal inferences. Our policy in my lab is to be upfront about the ambiguities inherent in the inferences. This kind of ambiguous conclusion tends to upset reviewers, because they expect (rather, demand) big-news results. But big news is, more often than not, just an illusion of certainty: noise that looks like a signal (see some of my recent papers in the Journal of Memory and Language). We could easily have over-dramatized the paper and dressed it up to say far more than is warranted by the analyses. Our goal here was to tell the story with all its uncertainties laid bare. The more papers out there that make measured claims, with all the limitations laid out openly, the easier it will be for reviewers (and editors!) to learn to accept that one can learn something important from a modeling exercise without necessarily obtaining a decisive result.

Download the paper from here: https://psyarxiv.com/r7dn5

A computational evaluation of two models of retrieval processes in sentence processing in aphasia

Abstract:

Can sentence comprehension impairments in aphasia be explained by difficulties arising from dependency completion processes in parsing? Two distinct models of dependency completion difficulty are investigated, the Lewis and Vasishth (2005) activation-based model, and the direct-access model (McElree, 2000). These models’ predictive performance is compared using data from individuals with aphasia (IWAs) and control participants. The data are from a self-paced listening task involving subject and object relative clauses. The relative predictive performance of the models is evaluated using k-fold cross validation. For both IWAs and controls, the activation model furnishes a somewhat better quantitative fit to the data than the direct-access model. Model comparison using Bayes factors shows that, assuming an activation-based model, intermittent deficiencies may be the best explanation for the cause of impairments in IWAs. This is the first computational evaluation of different models of dependency completion using data from impaired and unimpaired individuals. This evaluation develops a systematic approach that can be used to quantitatively compare the predictions of competing models of language processing.
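The model-comparison machinery, k-fold cross-validation scored by held-out predictive density, can be sketched independently of the psycholinguistic models themselves. Below is a minimal, hypothetical Python illustration: two stand-in "models" (simple Gaussian fits; the paper's models are Bayesian process models fit in Stan, not these toys) are fit on k-1 folds and scored on the held-out fold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "reading times" on the log scale (invented data for illustration).
y = rng.normal(loc=6.0, scale=0.4, size=120)

def fit_model_a(train):
    # Stand-in model A: Gaussian with fitted mean and spread.
    return np.mean(train), np.std(train)

def fit_model_b(train):
    # Stand-in model B: deliberately misspecified, fixed spread.
    return np.mean(train), 1.0

def kfold_score(y, fit, k=10):
    # Sum of held-out Gaussian log predictive densities across k folds.
    folds = np.array_split(rng.permutation(len(y)), k)
    total = 0.0
    for held in folds:
        train = np.delete(y, held)
        mu, sd = fit(train)
        total += np.sum(-0.5 * np.log(2 * np.pi * sd**2)
                        - (y[held] - mu) ** 2 / (2 * sd**2))
    return total

score_a = kfold_score(y, fit_model_a)
score_b = kfold_score(y, fit_model_b)
# The model with the higher held-out log density has the better predictive fit.
```

The point of scoring on held-out folds rather than the training data is that a model is rewarded for predicting new observations, not for merely fitting old ones.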

Wednesday, November 11, 2020

New paper: Modeling misretrieval and feature substitution in agreement attraction: A computational evaluation

This is an important new paper from our lab, led by Dario Paape, and with Serine Avetisyan, Sol Lago, and myself as co-authors. 

One thing that this paper accomplishes is that it showcases the incredible expressive power of Stan, a probabilistic programming language developed by Andrew Gelman and colleagues at Columbia for Bayesian modeling. Stan allows us to implement relatively complex process models of sentence processing and test their performance against data. Paape et al. show how we can quantitatively evaluate the predictions of different competing models. There are plenty of papers out there that test different theories of encoding interference. What's revolutionary about this approach is that one is forced to make a commitment to one's theories; no more vague hand gestures. The limitations of what one can learn from the data and the models will always be an issue---one never has enough data, even when people think they do. But in our paper we are completely upfront about the limitations; and all code and data are available at https://osf.io/ykjg7/ for the reader to look at, investigate, and build upon.

Download the paper from here: https://psyarxiv.com/957e3/

Modeling misretrieval and feature substitution in agreement attraction: A computational evaluation

Abstract

 We present a self-paced reading study investigating attraction effects on number agreement in Eastern Armenian. Both word-by-word reading times and open-ended responses to sentence-final comprehension questions were collected, allowing us to relate reading times and sentence interpretations on a trial-by-trial basis. Results indicate that readers sometimes misinterpret the number feature of the subject in agreement attraction configurations, which is in line with agreement attraction being due to memory encoding errors. Our data also show that readers sometimes misassign the thematic roles of the critical verb. While such a tendency is principally in line with agreement attraction being due to incorrect memory retrievals, the specific pattern observed in our data is not predicted by existing models. We implement four computational models of agreement attraction in a Bayesian framework, finding that our data are better accounted for by an encoding-based model of agreement attraction, rather than a retrieval-based model. A novel contribution of our computational modeling is the finding that the best predictive fit to our data comes from a model that allows number features from the verb to overwrite number features on noun phrases during encoding.

Tuesday, November 10, 2020

Is it possible to write an honest psycholinguistics paper?

I'm teaching a new course this semester: Case Studies in Statistical and Computational Modeling. The idea is to revisit published papers, along with their associated data and code, and p-hack the papers creatively to get whatever result you like. Yesterday I demonstrated that we could conclude whatever we liked from a recent paper that we had published; all conclusions (effect present, effect absent) were valid under different assumptions! The broader goal is to demonstrate how researcher degrees of freedom play out in real life.

Then someone asked me this question in the class:

Is it possible to write an honest psycholinguistics paper? 

The short answer is: yes, but you have to accept that some editors will reject your paper. If you can live with that, it's possible to be completely honest. 

Usually, the only way to get a paper into a major journal is to make totally overblown claims that are completely unsupported or only very weakly supported by the data. If your p-value is 0.06 but you want to claim it is significant, you have several options: mess around with the data till you push it below 0.05; claim "marginal significance"; bury the result and keep redoing the experiment till it works; or keep running the experiment till you get significance. There are plenty of tricks out there.

If you get super-duper low p-values, you are on a good path to a top publication; in fact, if you have any significant p-values (relevant to the question or not) you are on a good path to publication, because reviewers are impressed by p<0.05 somewhere, anywhere, in a table. That's why you will see huge tables in psychology articles, with tons and tons of p-values; the sheer force of low p-values spread over a gigantic table can convince the reviewer to accept the paper, even though only a single cell among the dozens or hundreds in that table actually tests the hypothesis. You can rely on the fact that nobody will think to ask whether power was low (the answer is usually yes), or how many comparisons were done.
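To see why "run the experiment till you get significance" is a trick and not a method, consider a simulation of optional stopping when the true effect is zero. This is only a sketch (a z-test with known standard deviation, checking after every 10 added subjects; all numbers are invented), but the inflation of the false-positive rate above the nominal 5% is the general phenomenon:

```python
import math
import random

random.seed(42)

def p_value(xs):
    # Two-sided z-test of mean = 0, assuming a known sd of 1.
    n = len(xs)
    z = abs(sum(xs) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def run_until_significant(start_n=20, step=10, max_n=100):
    # Optional stopping: test, and if p >= .05, add subjects and test again.
    xs = [random.gauss(0, 1) for _ in range(start_n)]  # the null is TRUE here
    while True:
        if p_value(xs) < 0.05:
            return True          # "significant" result obtained
        if len(xs) >= max_n:
            return False         # gave up
        xs += [random.gauss(0, 1) for _ in range(step)]

n_sims = 5000
false_positive_rate = sum(run_until_significant() for _ in range(n_sims)) / n_sims
# Despite the null being true, the rejection rate ends up well above 0.05,
# because the data get repeated chances to cross the threshold by luck.
```

Each extra "peek" at the data is another lottery ticket; with nine looks per simulated experiment, the nominal 5% error rate roughly doubles or triples.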

Here are some examples of successes and failures, i.e., situations where we honestly reported what we found and were either summarily rejected or (perhaps surprisingly) accepted.

For example, in the following paper, 

Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, and Andrew Gelman. The statistical significance filter leads to overoptimistic expectations of replicability. Journal of Memory and Language, 103:151-175, 2018.

I wrote the following conclusion:

"In conclusion, in this 100-participant study we don't see any grounds for claiming an interaction between Load and Distance. The most that we can conclude is that the data are consistent with memory-based accounts such as the Dependency Locality Theory (Gibson, 2000), which predict increased processing difficulty when subject-verb distance is increased. However, this Distance effect yields estimates that are also consistent with our posited null region; so the evidence for the Distance effect cannot be considered convincing."

Normally, such a tentative statement would lead to a rejection. E.g., here is a statement in another paper that led to a desk rejection (by the same editor) at the same journal where the above paper was published:

"In sum, taken together, Experiment 1 and 2 furnish some weak evidence for an interference effect, and only at the embedded auxiliary verb."

We published the above (rejected) paper in Cognitive Science instead.

In another example, both of the key effects discussed in this paper would technically have been non-significant had we done a frequentist analysis. The fact that we interpreted the Bayesian credible intervals with reference to a model's quantitative predictions doesn't change that detail. However, the paper was accepted:

Lena A. Jäger, Daniela Mertzen, Julie A. Van Dyke, and Shravan Vasishth. Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study. Journal of Memory and Language, 111, 2020.

In the above paper, we were pretty clear about the fact that we didn't manage to achieve high enough power even in our large-sample study: Table A1 shows that for the critical effect we were studying, we probably had power between 25 and 69 percent, which is not dramatically high.
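Power figures like the 25-69% range above are easy to estimate by simulation. The sketch below uses hypothetical effect sizes and noise values (not those of the actual study) to show the Monte Carlo logic for a two-sided z-test:

```python
import math
import random

random.seed(7)

def power_sim(effect, sd, n, n_sims=2000, crit=1.96):
    # Monte Carlo power of a two-sided one-sample z-test with known sd:
    # simulate many experiments and count how often the test rejects.
    hits = 0
    for _ in range(n_sims):
        m = sum(random.gauss(effect, sd) for _ in range(n)) / n
        if abs(m) / (sd / math.sqrt(n)) > crit:
            hits += 1
    return hits / n_sims

# Hypothetical values: a 10 ms effect vs a 25 ms effect, sd = 100 ms, n = 180.
low_power  = power_sim(effect=10, sd=100, n=180)   # small effect: low power
high_power = power_sim(effect=25, sd=100, n=180)   # larger effect: high power
```

With these invented numbers, the smaller effect is detected only about a quarter of the time; even a "large-sample" study can be underpowered when the true effect is small.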

There are many other such examples from my lab, of papers accepted despite tentative claims, and papers rejected because of tentative claims. In spite of the  rejections, my plan is to continue telling the story like it is, with a limitations section. My hope is that editors will eventually understand the following point:

Almost no paper in psycholinguistics is going to give you a decisive result (it doesn't matter what the p-values are). So, rejecting a paper on the grounds that it isn't reporting a conclusive result is based on a misunderstanding about what we learnt from that paper. We almost never have conclusive results, even when  we claim we do. Once people realize that, they will become more comfortable accepting more realistic conclusions from data. 

Wednesday, September 16, 2020

Zoom link for my talk: Twenty years of retrieval models

Here is the zoom registration link to my talk at UMass on Sept 25, 21:30 CEST (15:30 UMass time).
Title: Twenty years of retrieval models
Abstract:
After Newell wrote his 1973 article, "You can't play twenty questions with nature and win", several important cognitive architectures emerged for modeling human cognitive processes across a wide range of phenomena. One of these, ACT-R, has played an important role in the study of memory processes in sentence processing. In this talk, I will talk about some important lessons I have learnt over the last 20 years while trying to evaluate ACT-R based computational models of sentence comprehension. In this connection, I will present some new results from a recent set of sentence processing studies on Eastern Armenian.
Reference: Shravan Vasishth and Felix Engelmann. Sentence comprehension as a cognitive process: A computational approach. 2021. Cambridge University Press. https://vasishth.github.io/RetrievalModels/

Zoom registration link: You are invited to a Zoom webinar.
When: Sep 25, 2020, 09:30 PM Amsterdam, Berlin, Rome, Stockholm, Vienna
Topic: UMass talk Vasishth
Register in advance for this webinar: https://zoom.us/webinar/register/WN_89F7BObjSwmxnK6DRC9fuQ
After registering, you will receive a confirmation email containing information about joining the webinar.

Tuesday, September 15, 2020

Twenty years of retrieval models: A talk at UMass Linguistics (25 Sept 2020)

I'll be giving a talk at UMass' Linguistics department on 25 September, 2020, over zoom naturally. Talk title and abstract below:
Twenty years of retrieval models
Shravan Vasishth (vasishth.github.io)
After Newell wrote his 1973 article, "You can't play twenty questions with nature and win", several important cognitive architectures emerged for modeling human cognitive processes across a wide range of phenomena. One of these, ACT-R, has played an important role in the study of memory processes in sentence processing. In this talk, I will talk about some important lessons I have learnt over the last 20 years while trying to evaluate ACT-R based computational models of sentence comprehension. In this connection, I will present some new results from a recent set of sentence processing studies on Eastern Armenian.
Reference: Shravan Vasishth and Felix Engelmann. Sentence comprehension as a cognitive process: A computational approach. 2021. Cambridge University Press. https://vasishth.github.io/RetrievalModels/

Monday, September 07, 2020

Registration open for two statistics-related webinars: SMLP Wed 9 Sept, and Fri 11 Sept 2020

As part of the summer school in Statistical Methods for Linguistics and Psychology, we have organized two webinars that anyone can attend. However, registration is required. Details below.

Keynote speakers

  • Wed 9 Sept, 5-6PM: Christina Bergmann (Title: The "new" science: transparent, cumulative, and collaborative)
    Register for webinar: here
    Abstract: Transparency, cumulative thinking, and a collaborative mindset are key ingredients for a more robust foundation for experimental studies and theorizing. Empirical sciences have long faced criticism for some of the statistical tools they use and the overall approach to experimentation; a debate that has in the last decade gained momentum in the context of the "replicability crisis." Culprits were quickly identified: False incentives led to "questionable research practices" such as HARKing and p-hacking and single, "exciting" results are over-emphasized. Many solutions are gaining importance, from open data, code, and materials - rewarded with badges - over preregistration to a shift away from focusing on p values. There are a host of options to choose from; but how can we pick the right existing and emerging tools and techniques to improve transparency, aggregate evidence, and work together? I will discuss answers fitting my own work spanning empirical (including large-scale), computational, and meta-scientific studies, with a focus on strategies to see each study for what it is: A single brushstroke of a larger picture.
  • Fri 11 Sept, 5-6PM: Jeff Rouder (Title: Robust cognitive modeling)
    Register for webinar: here
    Abstract: In the past decade, there has been increased emphasis on the replicability and robustness of effects in psychological science. And more recently, the emphasis has been extended to cognitive process modeling of behavioral data under the rubric of “robust models." Making analyses open and replicable is fairly straightforward; more difficult is understanding what robust models are and how to specify and analyze them. Of particular concern is whether subjectivity is part of robust modeling, and if so, what can be done to guard against undue influence of subjective elements. Indeed, it seems the concept of "researchers' degrees of freedom" plays writ large in modeling. I take the challenge of subjectivity in robust modeling head on. I discuss what modeling does in science, how to specify models that capture theoretical positions, how to add value in analysis, and how to understand the role of subjective specification in drawing substantive inferences. I will extend the notion of robustness to mixed designs and hierarchical models as these are common in real-world experimental settings.

Jeff Rouder's keynote address at AMLaP 2020: Qualitative vs. Quantitative Individual Differences: Implications for Cognitive Control

 For various reasons, Jeff Rouder could not present his keynote address live. 

Here it is as a recording

Qualitative vs. Quantitative Individual Differences: Implications for Cognitive Control

Jeff Rouder (University of Missouri) rouderj@missouri.edu

Consider a task with a well-established effect such as the Stroop effect. In such tasks, there is often a canonical direction of the effect—responses to congruent items are faster than incongruent ones. And with this direction, there are three qualitatively different regions of performance: (a) a canonical effect, (b) no effect, or (c) an opposite or negative effect (for Stroop, responses to incongruent stimuli are faster than responses to congruent ones). Individual differences can be qualitative in that different people may truly occupy different regions; that is, some may have canonical effects while others may have the opposite effect. Or, alternatively, it may only be quantitative in that all people are truly in one region (all people have a true canonical effect). Which of these descriptions holds has two critical implications. The first is theoretical: Those tasks that admit qualitative differences may be more complex and subject to multiple processing pathways or strategies. Those tasks that do not admit qualitative differences may be explained more universally. The second is practical: it may be very difficult to document individual differences in a task or correlate individual differences across task if these tasks do not admit qualitative individual differences. In this talk, I develop trial-level hierarchical models of quantitative and qualitative individual differences and apply these models to cognitive control tasks. Not only is there no evidence for qualitative individual differences, the quantitative individual differences are so small that there is little hope of localizing correlations in true performance among these tasks.


Sunday, September 06, 2020

Some thoughts on teaching statistics courses online

Someone asked me to write down how I teach online. Because of corona, I have moved all my courses at the university online, and as a consequence I had to clean up my act and get things in order.

The first thing I did was record all my lectures in advance. This was a hugely time-consuming enterprise. I bought a licence for screencast-o-matic, which costs something like 15 Euros a year, and a Blue Yeti microphone (144 Euros, including express shipping). I already have a Logitech HD 1080p camera. I also bought a Windows (Dell) tablet computer through the university, so that I could write freehand with an electronic pen. Somehow, writing freehand during a lecture solidifies understanding in the student's mind in a way that a mere slide presentation does not. I don't know why this is the case, but I firmly believe one should show derivations in real time.

The way I do my recordings is that I start screencast-o-matic (the new Mac OS X makes this incredibly hard: you have to repeatedly open the settings and give the software permission to record--thanks, Apple). Then, I record the lecture in one shot, with no editing at all. If I make a mistake during the lecture, I just live with it (and sometimes the mistakes are horrendous). Sometimes my cat Molly video-bombs my lectures; I just let it all happen. All this makes my video recordings less than professional looking, but I think it's good enough; nobody has complained about it so far. I use Google Chrome's Remote Desktop feature to link my Macbook Pro with the Windows machine, and switch between RStudio on the Mac and the Windows tablet for writing. On Windows, I use the infinite writing space provided by OneNote. For writing on pdfs, I use the PDF reader by Xodo.


Here are my videos from my frequentist course:

https://vasishth.github.io/IntroductionStatistics/

The way students are expected to work is to watch the videos, and then do exercises that I give out. My lecture notes provide a written record of the material, plus the exercises:

https://vasishth.github.io/Freq_CogSci/

The solutions are given out after the submission deadline. In my courses, I stipulate that you can only take the class if you commit to doing at least 80% of the homework; I make people quit the class if they don't do the HW. Many people try to audit the classes without doing the HW, but in my experience they don't get anything out of the class, so I don't allow it. This is a very effective strategy, because it forces the students to engage. One rule I have is that if you submit the HW and make an honest attempt to solve the problems, you get 100% on the HW no matter what. This decouples learning from grades, reduces student stress considerably, and allows them to actually learn the material. Some students complain that the HW is hard; but it's supposed to make them think, and there is no shame in not being able to do it. Some students are unable to adjust to the fact that not everything will be easy.

Two other components of the class are (a) weekly meetings over zoom, where students can ask me anything, and (b) an online discussion forum where people can post questions. Students used these options really intelligently, and although I had to spend a lot of time answering questions on the forum, I think on balance it was worth the effort. I think the students got a lot out of my courses, judging from the teaching evaluations (here and here).

The main takeaway for me was that the online component of these stats courses that I teach is crucial for student learning, and in future editions of my courses, there will always be an online component. One day we will have face to face classes, and I think those are very valuable for establishing human contact. But the online component really adds value, especially the pre-recorded lectures and the discussion forum.


Some thoughts on the completely online Architectures and Mechanisms of Language Processing conference

 Some time ago, I wrote a blog post on the carbon cost of conferences:

https://vasishth-statistics.blogspot.com/2019/10/estimating-carbon-cost-of.html

The background for this post was that at the time I was organizing the AMLaP 2020 conference, and was beginning to wonder whether these international conferences are even sustainable given the unfolding climate crisis. In discussions with others, one question someone raised was: what is the actual carbon cost of conferences? This made me curious to find out what the rough carbon cost would be, hence the above-linked post. At the time, it didn't even occur to me that a viable alternative could be a completely online conference.

But then corona happened, and Brian Dillon moved CUNY completely online. I didn't attend that conference because I was going through a medical crisis at the time.  But around that time I realized that I would have to move AMLaP online as well. By then my medical situation was going from bad to worse, so I handed over control to Titus von der Malsburg. Titus masterfully navigated all the obstacles to get AMLaP up and running, helped by a large team consisting of my lab members and several other department members. I was pretty amazed to see how superbly organized and well-coordinated this team was. 

Having attended this and a satellite conference, SAFAL, online, I have to admit that an online conference just doesn't have the same look and feel as a real conference. There is no substitute for sitting down with colleagues from all over the world and chatting with them over a beer; an online conversation over zoom just doesn't cut it. However, if we want to take the carbon cost issue seriously, I feel that online conferences are here to stay. At the very least, it should be possible in the future to hold hybrid conferences; people should be able to participate (and I mean ask questions after talks and meet people) from a remote location. I got several emails and other messages from people telling me they could only participate because AMLaP was online; some were pregnant and unable to travel, some (like me) had too serious a medical condition to allow them to travel, and some just don't have the money to go to a conference. Interestingly, psycholinguists from India were well-represented at AMLaP, I think for the first time (I didn't have any direct hand in making this happen; India has an emerging group of highly competent and sophisticated psycholinguists). So I think the online format makes the conference more inclusive as well.

One further thing many people noticed is that younger people were asking more questions after talks than at physical conferences. At physical psycholinguistics conferences, senior people sometimes dominate the discussions. This isn't even possible in an online conference, because the moderators have total control over which question is asked and by whom. But it seemed like it was mostly younger people who felt comfortable asking questions online; I saw very few questions from senior people. This is good news, because the younger people should be out there engaging with the field. 

This year, we used gather.town to socialize. Take a look at it. Initially I was skeptical that it would allow for much socializing, but it worked surprisingly well. I noticed that some of the young people were hesitant to approach older ones, so I boldly went up to them and talked with them. It worked well; I met several young MSc and early PhD students. I also met up with colleagues I hadn't seen for over a decade, I think (Tessa Warren, for example). It was nothing like a face-to-face meeting, but it was still fun and better than nothing. Pro tip: you can make your avatar on gather.town dance by pressing the z key. Cool. Brian Dillon, Dustin Chacon, and I had a brief dance party (no music though). You get little hearts growing bigger and bigger over your avatar's head while you dance. Neat.

So overall, despite the huge disadvantage that one can't meet people in person, there is enough to gain from running conferences online that all future conferences should have at least a live-streaming component. The talks should be on Twitch or some other platform, and they should be recorded and stored online for everyone to view. This will create a more inclusive environment and can only be good for the field. As a side effect, it is also a positive thing we can do towards mitigating the climate crisis. Every little bit counts.

You can watch the conference recording on Twitch. A more permanent recording will appear on the amlap2020.org home page eventually.


Saturday, August 29, 2020

Two interesting conferences are happening next week at Potsdam (Germany): SAFAL and AMLaP

Psycholinguists worldwide will be interested in attending two conferences that are starting online next week. Registration is free for both.

1. South Asian Forum on the Acquisition and Processing of Language (SAFAL)

https://sites.google.com/view/safal2020/home

This conference, running from 31st August to 2nd September, is going to be all about language processing in South Asian languages. South Asia is a hugely understudied area in psycholinguistics; this conference is going to showcase some of the new and important work coming out of this part of the world.

2. Architectures and Mechanisms for Language Processing (AMLaP)

https://amlap2020.org/

This is the biggest European conference on psycholinguistics. We have a special session on Computational Models of Language Processing, five keynotes from leading scientists, 25 talks, and lots of posters. I look forward to meeting everyone from psycholinguistics virtually.

Thursday, August 20, 2020

Summer school: Statistical Methods for Linguistics and Psychology, 2020

The summer school website has been updated with the materials (lecture notes, exercises, and videos) for the introductory frequentist and Bayesian streams. Details here:

https://vasishth.github.io/smlp2020/ 

Wednesday, August 19, 2020

Two keynote lectures at the Fourth Summer School on Statistical Methods for Linguistics and Psychology, 7-11 September 2020

We have two interesting Zoom talks at the SMLP summer school, which is being held fully online this year. In my next post, I will post all the lecture materials for two of the four streams: Frequentist Foundations and Introduction to Bayesian Data Analysis.

Two keynote lectures may be of general interest to the public (zoom link will be provided in this post closer to the date):

Wednesday 9 Sept, 5PM CEST (Berlin time):


Christina Bergmann (Title: The "new" science: transparent, cumulative, and collaborative)

Abstract: Transparency, cumulative thinking, and a collaborative mindset are key ingredients for a more robust foundation for experimental studies and theorizing. Empirical sciences have long faced criticism for some of the statistical tools they use and the overall approach to experimentation; a debate that has in the last decade gained momentum in the context of the "replicability crisis." Culprits were quickly identified: False incentives led to "questionable research practices" such as HARKing and p-hacking and single, "exciting" results are over-emphasized. Many solutions are gaining importance, from open data, code, and materials - rewarded with badges - over preregistration to a shift away from focusing on p values. There are a host of options to choose from; but how can we pick the right existing and emerging tools and techniques to improve transparency, aggregate evidence, and work together? I will discuss answers fitting my own work spanning empirical (including large-scale), computational, and meta-scientific studies, with a focus on strategies to see each study for what it is: A single brushstroke of a larger picture.


Friday 11 Sept, 5PM CEST (Berlin time):

Jeff Rouder (Title: Robust cognitive modeling)

Abstract: In the past decade, there has been increased emphasis on the replicability and robustness of effects in psychological science. And more recently, the emphasis has been extended to cognitive process modeling of behavioral data under the rubric of “robust models." Making analyses open and replicable is fairly straightforward; more difficult is understanding what robust models are and how to specify and analyze them. Of particular concern is whether subjectivity is part of robust modeling, and if so, what can be done to guard against undue influence of subjective elements. Indeed, it seems the concept of "researchers' degrees of freedom" plays writ large in modeling. I take the challenge of subjectivity in robust modeling head on. I discuss what modeling does in science, how to specify models that capture theoretical positions, how to add value in analysis, and how to understand the role of subjective specification in drawing substantive inferences. I will extend the notion of robustness to mixed designs and hierarchical models as these are common in real-world experimental settings. 

Saturday, April 04, 2020

Developing the right mindset for learning statistics: Some suggestions


Introduction

Over the last few decades, statistics has become a central part of the linguist’s toolkit. In psychology, there is a long tradition of using statistical methods for data analysis, but linguists and other cognitive scientists are relative newcomers to this area, and the formal statistics coursework provided in graduate programs is still quite sketchy. For example, as a grad student at Ohio State, in 1999 or 2000 or so, I did a four-week intensive course on statistics, after which I could do t-tests and ANOVAs on my data using JMP. Even in psychology departments, the amount of exposure students get to statistics varies a lot.

As part of Potsdam’s graduate linguistics/cognitive science/cognitive systems programs, we teach a sequence of five courses involving data analysis and statistics:

  • (Winter) Statistical data analysis 1
  • (Winter) Bayesian statistical inference 1
  • (Winter) Case studies in psycholinguistics
  • (Summer) Statistical data analysis 2
  • (Winter) Bayesian statistical inference 2

In addition, we teach (in winter) a Foundations of Mathematics course that covers undergraduate calculus, probability theory, and linear algebra. This course is designed for people who plan to take the machine learning classes in computer science, as part of the MSc in Cognitive Systems.

Students sometimes have difficulties in these courses because there is an art to taking them that is not obvious. This short note spells out some important aspects of that art.

In my experience, anyone can learn this way of approaching the study of statistics, which is inherently difficult. Keep in mind that when learning something new, one might not understand everything, but that’s OK. The whole world is built on partial understanding (I myself have only a very incomplete picture of statistics, and it’s likely to stay that way). Someone once told me that the key difference between a mathematician and a “normal” person is that the mathematician will keep reading or listening even if they are not following the details of the presentation. One can learn to become comfortable with partial understanding, safe in the knowledge that one can come back to the open questions later.

Below, I am shamelessly going to borrow from this (to my mind) classic book:

Burger, E. B., & Starbird, M. (2012). The 5 elements of effective thinking. Princeton University Press.

I strongly advise you to read the Burger and Starbird book; it’s short and very practically oriented. I re-read it once a year on average just to remind myself of the main ideas.

My comments below are specifically oriented towards the learning of statistics as my colleagues and I teach it at Potsdam, so my examples are very specifically about the material I teach. The examples are really the only thing I add beyond what’s in the Burger and Starbird book.

Developing the right mindset: A checklist

Understand the “easy” stuff deeply

Ask yourself: when starting the study of statistics, what is the basic knowledge I will need (I review all these topics in my introductory classes)? You will not be in a position to answer this question when you start your studies, but after completing one or two courses you should revisit this question.

  • The basic elements of probability theory (sum rule, product rule, conditional probability, law of total probability)
  • Basic high-school algebra (e.g., given \(y = \frac{x}{1-x}\), solve for \(x\))
  • How to deal with exponents: \(x^2 \times x^3 = ?\) Is it \(x^5\) or \(x^6\)? We learnt this in school but forgot it because we didn’t use it for many years. But now we need this knowledge!
  • What is a log? What is log(1)? What is log(0)? How can one find out if one has forgotten?
  • What is a probability distribution? This requires some careful navigation. The key concepts here are the probability mass function (discrete case), the probability density function (continuous case), and the cumulative distribution function. In bivariate/multivariate distributions, conditional, marginal, and joint distributions must be well-understood intuitively. The key here is to develop graphical intuition, using simulation. I teach this approach in my courses. Statisticians use calculus when discussing the properties of probability distributions; however, we can do all this graphically and lose no information. In practice, we rarely or never need to do any analytical work involving mathematical derivations; the software does all the work. However, it is important to understand the details intuitively, and here figures help a lot. A basic rule of thumb is: whenever trying to understand something, try to visualize it graphically. Even something mundane like repeated coin tosses can be graphically visualized, and then everything becomes clear.
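As a minimal sketch of what I mean by graphical intuition (base R only; the numbers are arbitrary illustrative choices): simulate many runs of 10 fair-coin tosses, plot the distribution of the number of heads, and compare against the analytical binomial probabilities.

```r
# Simulate 10 fair-coin tosses many times and visualize the
# distribution of the number of heads (the binomial PMF)
set.seed(123)
nheads <- rbinom(100000, size = 10, prob = 0.5)
barplot(table(nheads) / length(nheads),
        xlab = "number of heads in 10 tosses",
        ylab = "proportion")
# The simulated proportions line up with the analytical PMF:
round(dbinom(0:10, size = 10, prob = 0.5), 3)
```

The same simulate-then-plot recipe works for densities (hist plus dnorm) and cumulative distribution functions (ecdf plus pnorm), so one rarely needs calculus to build the intuition.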

Going back repeatedly to these foundational ideas as one advances through the courses is very important. The goal should be to internalize them deeply, through graphical intuition.

Mistakes are your friend and teacher

Throughout our school years, we are encouraged to deliver the right answers, and penalized for delivering wrong answers. This style of schooling misses the point that mistakes can teach us more than our correct answers, if we compare the expected answer with ours and try to work out what we got wrong and why. This is called “error-driven learning” or something like that in machine learning, and it works with humans too. Don’t be afraid to make mistakes, but try to make only new mistakes, and keep learning from them.

Students generally assume that I will judge them if they get something wrong. This is a false impression. As I said above, you can learn more from a mistake than from a correct answer. My own grades in my studies of statistics were not stellar, as you can see for yourself; they are all online:

https://vasishth-statistics.blogspot.com/2015/02/getting-statistics-education-review-of.html

Despite my mediocre grades, I still learnt a lot. Similarly, in graduate school, at Ohio State, my grades were just OK to so-so, nothing to write home about. In computer science (Ohio State), my grades were usually in the range of B+. I rarely got an A-. I still learnt important and useful stuff.

How to develop curiosity: Solve the same problem more than one way, and generate your own questions

The Burger and Starbird book encourages the reader to become curious about a problem. Here, I suggest a very concrete strategy for doing this, e.g., when working on homework assignments.

  • First, create some mental space and time. Don’t try to squeeze the homework assignment into the last two hours before the submission deadline. Create a clear day ahead of you to explore a problem. I know that courses are designed these days to require at most 2-3 hours of work per week at home. This is an unfortunate productionalization of education that is now hurting the education system in Europe. If you need to stick to that tight schedule, do what you can in the limited time, but even then it is good not to leave the work to the last hours before submission. If you create more time, use it to explore in the following way.
  • Second, assuming you have some extra time, try to solve the given problem using different approaches. E.g., if the assignment asks you to use a lognormal likelihood in a linear mixed model, ask yourself if there is some way to solve the problem with the standard normal likelihood. If the problem asks you to work with brms, try to also solve the problem using Stan or even rstanarm, even if the assignment doesn’t ask you to do this. You are doing this for yourself, not for submitting the assignment. Even if the assignment doesn’t ask you to change the priors in a model, fool around with them to see what happens to the posteriors. If there is an LKJ(2) prior on a correlation parameter in the linear mixed model, find out what happens if you use LKJ(0.5) or LKJ(10). Etc.
  • Ask yourself what-if questions. Suppose you are learning about power analysis using simulation, a topic I cover in all my advanced classes, Bayesian or frequentist. This topic is ripe for exploration. Power depends essentially on three variables: effect size, sample size, and standard deviation. That is a fertile playground! I have spent so much time playing with power analyses that I can give quite accurate ballpark estimates for my research problems without any simulation (of course, I always check my answers using simulation!). There are actually several different ways to compute power; you can use power.t.test, you can do it using simulation, etc. This topic is perfect for developing a sense of curiosity, but you can do this for really any topic.
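As one small example of this kind of play (the effect size, standard deviation, and sample size below are arbitrary illustrative values): estimate power for a one-sample design by simulation, then cross-check the estimate against power.t.test.

```r
# Estimate power by simulation: one-sample t-test,
# true effect 30 ms, sd 100 ms, n = 40 subjects
set.seed(42)
nsim <- 5000
pvals <- replicate(nsim, {
  y <- rnorm(40, mean = 30, sd = 100)
  t.test(y)$p.value
})
mean(pvals < 0.05)  # simulation-based power estimate

# Cross-check with the closed-form computation:
power.t.test(n = 40, delta = 30, sd = 100,
             type = "one.sample")$power
```

The two numbers should agree up to simulation error; now change delta, sd, or n and watch how power responds. That is the what-if game.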

Keep careful notes

Statistics is not to be trifled with. I don’t expect anyone to memorize any formulas, but the logic of the analytical steps can get confusing. Keep good records of your learning. As an example, here is my entire record of four years of formal statistics study at the University of Sheffield (I did an MSc online, part time). These are cheat sheets I prepared while studying:

https://github.com/vasishth/MScStatisticsNotes

These notes are way more mathematical than anything I will teach at Potsdam. However, the principle is: organize your understanding of the material yourself. Don’t just let the teacher organize it for you (the teacher does do that, through slides and lecture notes!). We only understand things if we can actively produce and reorganize them ourselves.

Have a real problem you want to solve, and start simple

Usually, you will learn the most when you are desperate to get the answer to a data analysis problem. You will be working in a very small world of your own; you know your problem, and you are motivated to solve it. This is very different from homework assignments handed out, out of the blue, by the teacher. For this reason, especially in statistics courses, it is useful to come to the course with a specific problem you want to solve. As the course unfolds, apply the methods you learn to your problem. For example, suppose your supervisor has already told you that you need to fit a generalized linear mixed model with a logit link function to the data. Where to start?

Suppose you are taking a frequentist course and know that at the end of it you need to be able to complete the data analysis your supervisor asked you to do. You can start by simplifying the problem radically and working with what you already know. Could you run a t-test instead? It doesn’t matter that someone told you that that’s the wrong test; we are playing here. Could you just fit a simple linear model (again wrong, but this is exploration)? Just these two exercises will leave you with a lot of interesting insights to explore. Once you learn about linear mixed models, you can start exploring whether you can fit the model with the standard lmer function and what it would tell you. Once you reach that point, you are close to the analysis you were told to do. Even if I don’t teach it in class, you can use the last trick to get there, which I discuss next.

“Let me google that for you”: Learn to find information

Any time someone asks you a question you consider easily answered by googling, and you feel like being mean, you can use this website to deliver a sarcastic response: https://lmgtfy.com/. You simply type in the question, and then send the link to the person asking the question. When they click on it, the question is typed into the Google search window, and they are invited to click on the search button. It’s a pretty passive-aggressive thing to do, and I advise you never to use this approach. :)

But despite the nasty aspect of the lmgtfy website, it does illustrate an important point: these days you can find a lot of information online. Here are some ways that I use the internet:

  • When I get an error message in RStudio I don’t understand (this happens pretty much daily), I just copy it and paste it into google’s search engine. Almost always, someone has had that same problem before and posted a solution. You have to be patient sometimes and look at a lot of the search engine results; but eventually you will find the answer. One gets better at this with experience. Sometimes one can’t solve the problem (e.g., I have a minor ongoing problem with Cairo fonts); it’s OK to give up and move on when it isn’t critical to the work one is doing.
  • For Bayesian data analysis, there are online forums one can ask questions at. E.g., discourse.mc-stan.org for Stan. For frequentist questions, there are R mailing lists (exercise: google them!).
  • Stack Exchange. I have gotten authoritative answers from distinguished scientists about math problems that I don’t have the technical knowledge to solve. Often, someone else has asked a similar question already, so it can happen that one doesn’t even need to ask.
  • Google scholar gives you access to scientific articles via keyword search.
  • Blogs: I use Feedly to follow R-bloggers and other blogs like Andrew Gelman’s. Over time I have learnt a lot from reading blog posts.

Obviously, googling is not a fail-safe strategy. Sometimes you will get incorrect information. What I generally do is try to cross-check any technical claims from other sources like textbooks.

A common complaint in my statistics courses is that I don’t teach enough R. That’s because one can never teach enough R. One has to keep looking stuff up as needed; this is the skill that I am suggesting that you acquire.

Look for connections between ideas

Often, statistics is taught like a random catalogue of tests: t-test, ANOVA, linear mixed model, Fisher exact test, etc., etc. Interestingly, however, many of these seemingly disparate ideas have deep connections. The t-value and the F-statistic are connected; the t-test and the linear mixed model are connected. Working out these relationships analytically is not difficult, but one needs some background to do it. For example, see

https://vasishth-statistics.blogspot.com/2018/04/a-little-known-fact-paired-t-test-is.html

Even if one doesn’t know enough to carry out this analytical derivation, one can play with data to get a feel for the connection. The way I first got a hint about the t-test and linear mixed model connection (discussed above analytically) was by simulating data and then analyzing it two different ways (t-test vs linear mixed model), and getting the exact same statistics. It was only much later that I saw how to work this out analytically. The point is that simulation will get you very far in such investigations. You may not be able to prove stuff mathematically (I usually can’t), but you can still gain insight.

Getting further in your study of statistics

It is possible to take the Potsdam courses and do solid statistical analyses. However, if you get curious about the underlying mathematics, or want to read more advanced textbooks, or want to get into the machine learning field, we teach a Foundations of Mathematics course that graduate students can take. Historically, people have benefitted from taking this course even if they had no previous math exposure in university. So this course is definitely optional and most people can skip it; but it’s available for anyone interested in going deeper.