We have two interesting Zoom talks at the SMLP summer school, which is being held fully online this year. In my next post, I will share all the lecture materials for two of the four streams: Frequentist Foundations and Introduction to Bayesian Data Analysis.
Two keynote lectures may be of general interest to the public (the Zoom link will be posted here closer to the date):
Wednesday 9 Sept, 5PM CEST (Berlin time):
Christina Bergmann (Title: The "new" science: transparent, cumulative, and collaborative)
Abstract:
Transparency, cumulative thinking, and a collaborative mindset are key ingredients for a more robust foundation for experimental studies and theorizing. Empirical sciences have long faced criticism for some of the statistical tools they use and for their overall approach to experimentation, a debate that has gained momentum over the last decade in the context of the "replicability crisis." Culprits were quickly identified: false incentives led to "questionable research practices" such as HARKing and p-hacking, and to an over-emphasis on single "exciting" results. Many solutions are gaining traction, from open data, code, and materials (rewarded with badges), through preregistration, to a shift away from focusing on p values. With a host of options to choose from, how can we pick the right existing and emerging tools and techniques to improve transparency, aggregate evidence, and work together? I will discuss answers that fit my own work, which spans empirical (including large-scale), computational, and meta-scientific studies, with a focus on strategies to see each study for what it is: a single brushstroke in a larger picture.
Friday 11 Sept, 5PM CEST (Berlin time):
Abstract:
In the past decade, there has been increased emphasis on the replicability and robustness of effects in psychological science. More recently, this emphasis has been extended to cognitive process modeling of behavioral data under the rubric of "robust models." Making analyses open and replicable is fairly straightforward; more difficult is understanding what robust models are and how to specify and analyze them. Of particular concern is whether subjectivity is part of robust modeling, and if so, what can be done to guard against the undue influence of subjective elements. Indeed, the concept of "researchers' degrees of freedom" is writ large in modeling.
I take the challenge of subjectivity in robust modeling head-on. I discuss what modeling does in science, how to specify models that capture theoretical positions, how to add value in analysis, and how to understand the role of subjective specification in drawing substantive inferences. I will extend the notion of robustness to mixed designs and hierarchical models, as these are common in real-world experimental settings.