Bayesian retrospective power analysis
Shravan Vasishth
4/11/2018
Introduction
Robert Grant said recently on Twitter that it would be great to do a Bayesian version of retrospective power analysis, but that he would leave it as an exercise for the reader. So I decided to take a shot at it.
Here is Robert leaving it as an exercise:
“I leave it as an exercise for the reader”
— Robert Grant (@robertstats) April 10, 2018
Power in a Bayesian setting would be the probability of getting a significant effect when the null hypothesis is in fact false. We can use Bayes’ rule to compute this. Let Type I error be \(\alpha=0.05\) and let Type II error be \(\beta\). The word “sig” means significant at \(\alpha=0.05\).
\(Pr(sig | H_0 false) = \frac{Pr(H_0 false | sig)\times Pr(sig)}{Pr(H_0 false)}\)
where:
\(Pr(sig) = Pr(sig | H_0 true) Pr(H_0 true) + Pr(sig | H_0 false) Pr(H_0 false) = \alpha \times Pr(H_0 true) + (1-\beta) \times (1-Pr(H_0 true))\)
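As a quick numerical check, this identity can be evaluated directly for illustrative fixed values (the numbers below are placeholders roughly matching the priors used later, not estimates from any data):
## illustrative placeholder values
alpha <- 0.05         ## Type I error
beta <- 0.71          ## hypothetical Type II error
prob_H0false <- 0.08  ## hypothetical Pr(H0 false)
## Pr(sig) by the law of total probability
alpha*(1-prob_H0false) + (1-beta)*prob_H0false
## [1] 0.0692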
We need some prior probability of \(H_0\) being false. Just as an example, we can assume that the null is false with some probability \(\theta\)
\(H_0 \sim Bernoulli(\theta)\)
and that \(\theta \sim Beta(5,60)\).
## probability of null being false
h0<-rbeta(10000,shape1 = 5, shape2 = 60)
mean(h0)
## [1] 0.07709584
quantile(h0,prob=c(0.025,0.975))
##       2.5%      97.5% 
## 0.02614536 0.15258372
hist(h0,freq=FALSE,xlab="beta(5,60)",main="Pr(H0 false)")
This assumes that the prior probability of the null being false lies roughly between 3% and 15% with probability 0.95, with a mean of about 8%.
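These numbers can also be cross-checked analytically against the Beta(5,60) prior (a check added here, not part of the simulation above):
## closed-form mean and 95% interval of the Beta(5,60) prior
5/(5+60)
## [1] 0.07692308
qbeta(c(0.025,0.975), shape1 = 5, shape2 = 60)
## these should closely match the simulation-based quantiles above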
Now,
\(Pr(H_0 false | sig) = \frac{Pr(sig|H_0 false)\times Pr(H_0 false)}{Pr(sig)} = \frac{(1-\beta)\times Pr(H_0 false)}{Pr(sig)}\)
where
\(Pr(sig) = Pr(sig | H_0 false) Pr(H_0 false) + Pr(sig | H_0 true) Pr(H_0 true) = (1-\beta) \times Pr(H_0 false) + \alpha \times (1-Pr(H_0 false))\)
So we can compute
\(Pr(sig | H_0 false) = \frac{Pr(H_0 false | sig)\times Pr(sig)}{Pr(H_0 false)}\)
We now compute this.
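Before drawing \(\beta\) and \(\theta\) from priors, here is a point-value sketch of the whole chain of formulas, again with placeholder values:
## point-value sketch with placeholder values
alpha <- 0.05
beta <- 0.71    ## hypothetical Type II error
theta <- 0.08   ## hypothetical Pr(H0 false)
sig <- (1-beta)*theta + alpha*(1-theta)   ## Pr(sig)
post <- (1-beta)*theta/sig                ## Pr(H0 false | sig)
post*sig/theta                            ## Pr(sig | H0 false)
## [1] 0.29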
In this simulation, I assume that \(\beta \sim Beta(10,4)\), which has mean \(10/14 \approx 0.71\); so Type II error is around 70% on average.
nsim <- 100000
post1 <- post2 <- rep(NA, nsim)
alpha <- 0.05
for(i in 1:nsim){
  ## draw Type II error and Pr(H0 false) from their priors:
  beta <- rbeta(1, shape1 = 10, shape2 = 4)
  theta <- rbeta(1, shape1 = 5, shape2 = 60)
  ## Pr(sig) by the law of total probability:
  sig1 <- (1-beta)*theta + alpha*(1-theta)
  ## Pr(H0 false | sig) by Bayes' rule:
  post1[i] <- (1-beta)*theta/sig1
  sig2 <- alpha*(1-theta) + (1-beta)*theta
  ## Pr(sig | H0 false):
  post2[i] <- post1[i]*sig2/theta
}
summary(post2)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.01177 0.19844 0.27486 0.28545 0.36148 0.79079
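For completeness, the draws of \(Pr(H_0 false | sig)\) stored in post1 can be summarized in the same way (output not shown here):
summary(post1)
quantile(post1, probs = c(0.025,0.975))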
So that’s one way to compute the “posterior power”. Of course, strictly speaking this is all nonsense, as I’m treating the null hypothesis as a random variable. But if we are willing to go that far, we can in principle compute posterior distributions of the probability of getting a significant effect given that the null is false.
If I have made a mistake somewhere please feel free to comment.
Comment: Robert posted this today: https://robertgrantstats.wordpress.com/2018/04/12/retrospective-power-revisited/