The Sixth Summer
School on Statistical Methods for Linguistics and Psychology will be
held in Potsdam, Germany, September 12-16, 2022. Like the previous
editions of the summer school, this edition will have two frequentist
and two Bayesian streams. Currently, this summer school is being planned
as an in-person event.
The application form closes April 1, 2022. We will announce
the decisions on or around April 15, 2022.
Course fee: There is no fee because the summer school is funded by the Collaborative Research Center (Sonderforschungsbereich 1287). However, we will charge 40 Euros to cover the costs of coffee and snacks during the breaks and social hours. Participants will also have to pay for their own accommodation.
For details, see: https://vasishth.github.io/smlp2022/
Curriculum:
1. Introduction to Bayesian data analysis (maximum 30 participants). Taught by Shravan Vasishth, assisted by Anna Laurinavichyute and Paula Lissón.
This course is an introduction to Bayesian modeling, oriented towards
linguists and psychologists. Topics to be covered: introduction to
Bayesian data analysis, linear modeling, and hierarchical models. We will
cover these topics within the context of an applied Bayesian workflow
that includes exploratory data analysis, model fitting, and model
checking using simulation. Participants are expected to be familiar with
R, and must have some experience in data analysis, particularly with
the R library lme4.
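As a rough indication of the assumed level of lme4 experience, participants should be comfortable fitting and interpreting a model of roughly the following kind (the data frame df and the variable names are hypothetical and serve only as an illustration):

  # Illustrative only: a typical lme4 model of the assumed complexity.
  # df is a hypothetical data frame with one row per trial.
  library(lme4)
  m <- lmer(log_rt ~ condition + (1 + condition | subject) + (1 | item),
            data = df)
  summary(m)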
Course Materials
Previous year's course web page: all materials (videos etc.) from the previous year are available here.
Textbook: here. We will work through the first six chapters.
2. Advanced Bayesian methods
This course assumes that participants already have some experience in
Bayesian modeling using brms and want to transition to Stan to learn more
advanced methods and start building simple computational cognitive
models. Participants should have worked through or be familiar with the
material in the first five chapters of our book draft:
Introduction to Bayesian Data Analysis for Cognitive Science.
In this course, we will cover Parts III to V of our book draft: model
comparison using Bayes factors and k-fold cross-validation, introductory
and relatively advanced models with Stan, and simple computational
cognitive models.
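To make the assumed starting point concrete, here is a minimal sketch of the kind of brms model participants are expected to have fit before, together with one way of inspecting the Stan code that brms generates. The data set, priors, and variable names are invented for illustration and are not taken from the course materials.

  # Illustrative sketch only: a hierarchical brms model at the assumed
  # background level, and the Stan program underlying it.
  library(brms)
  fit <- brm(rt ~ condition + (1 + condition | subject),
             data = df,            # hypothetical data set
             family = lognormal(),
             prior = c(prior(normal(6, 1.5), class = Intercept),
                       prior(normal(0, 1), class = b),
                       prior(normal(0, 1), class = sigma)))
  # Looking at the generated Stan code is a natural first step when
  # transitioning from brms to models written directly in Stan.
  stancode(fit)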
Course Materials
Textbook: here. We will start from Part III of the book (Advanced models with Stan).
Participants are expected to be familiar with the first five chapters.
3. Foundational methods in frequentist statistics
Participants will be expected to have used linear mixed models before, to the level of the textbook by Winter (2019, Statistics for Linguists), and to want to acquire a deeper knowledge of frequentist foundations and a better understanding of the linear mixed modeling framework. Participants are also expected to have fit multiple regressions. We will cover model selection and contrast coding, with a heavy emphasis on simulations to compute power and to understand what the model implies. We will work on (at least some of) the participants' own datasets.
This course is not appropriate for researchers new to R or to frequentist statistics.
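As an illustration of what is meant above by simulations to compute power, here is a minimal sketch in R; the design, effect size, and variance components are invented purely for illustration.

  # Minimal power-simulation sketch (all numbers invented): simulate data
  # from an assumed mixed-effects model, refit it repeatedly, and count
  # how often the effect of interest comes out significant.
  library(lme4)
  simulate_once <- function(n_subj = 30, n_item = 16, beta = 0.03) {
    d <- expand.grid(subject = factor(1:n_subj), item = factor(1:n_item))
    d$condition <- ifelse(as.integer(d$item) %% 2 == 0, 0.5, -0.5)  # +-0.5 coding
    d$log_rt <- 6 + beta * d$condition +
      rnorm(n_subj, sd = 0.10)[d$subject] +   # by-subject intercepts
      rnorm(n_item, sd = 0.05)[d$item] +      # by-item intercepts
      rnorm(nrow(d), sd = 0.30)               # residual noise
    m <- lmer(log_rt ~ condition + (1 | subject) + (1 | item), data = d)
    abs(coef(summary(m))["condition", "t value"]) > 2  # rough criterion
  }
  power <- mean(replicate(200, simulate_once()))
  power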
Course Materials
Textbook draft: here.
4. Advanced methods in frequentist statistics with Julia (maximum 30 participants). Taught by Reinhold Kliegl, Phillip Alday, Julius Krumbiegel, and Doug Bates.
Applicants must have experience with linear mixed models and be interested in learning how to carry out such analyses with the Julia-based MixedModels.jl package (i.e., the analogue of the R-based lme4 package). MixedModels.jl has some significant advantages, among them: (a) a new and more efficient computational implementation, (b) speed, which is needed for, e.g., complex designs and power simulations, (c) more flexibility for the selection of parsimonious mixed models, and (d) more flexibility in taking into account autocorrelations or other dependencies, typical of EEG- and fMRI-based time series (under development).
We do not expect profound knowledge of Julia from participants; the necessary subset of knowledge will be taught on the first day of the course. We do expect a readiness to install Julia and the confidence that, with some basic instruction, participants will be able to adapt prepared Julia scripts for their own data or to adapt some of their own lme4 commands to the equivalent MixedModels.jl commands. The course will be taught in a hybrid IDE. There is already the option to execute R chunks from within Julia, meaning one needs Julia primarily for executing MixedModels.jl commands as a replacement for lme4. There is also the option to call MixedModels.jl from within R and to process the resulting object like an lme4 object. Thus, much of the pre- and postprocessing (e.g., data simulation for complex experimental designs; visualization of partial-effect interactions or shrinkage effects) can be carried out in R.
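As a rough sketch of the second option (calling MixedModels.jl from within R), one possible route goes through the JuliaCall package. This is only an assumption-laden illustration: the data frame df, the model formula, and the setup details are not from the course materials, and the course itself may use a different interface.

  # Rough sketch, assuming a working Julia installation with the
  # MixedModels and DataFrames packages; df is a hypothetical R data frame.
  library(JuliaCall)
  julia_setup()                     # start an embedded Julia session
  julia_library("DataFrames")
  julia_library("MixedModels")
  julia_assign("df", df)            # copy the R data frame to Julia
  m <- julia_eval(
    "fit(MixedModel, @formula(log_rt ~ 1 + condition + (1 + condition | subject)), df)")
  m                                 # the fitted MixedModels.jl model

The resulting object can then be postprocessed on the R side, along the lines described above.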
Course Materials
GitHub repo: here.