Comments on Shravan Vasishth's Slog (Statistics blog): "Simulating scientists doing experiments"

Shravan Vasishth (2014-11-25 12:22):

Thanks for these comments. My stringent condition (sorry for not being clear) is that if an experimenter gets a p-value < 0.05, they must repeat the experiment AND get p < 0.05 once again (successively). This is very hard to achieve in psycholinguistics, in real life! That's why I called it stringent.

karthik durvasula (2014-11-22 16:50):

One last comment. I tried to imagine the analytic calculation that includes that last condition: "We will add the stringent condition that the scientist has to get one replication of a significant effect before they publish it."

Again, I might be missing a point, but it seems to me that the constraint is rather vague. What counts as a replication? For example, if a scientist can try 198 times and then gets a positive finding similar to the first attempt on the 199th retry, does that count as a replication? If it does, then the constraint is not stringent at all. It seems to me that more constraints are needed to implement what you have in mind.
Perhaps the replication counts only if it is in the immediately following experiment (or some such window)?

karthik durvasula (2014-11-22 16:36):

I just realized that I didn't take into consideration the following condition: "We will add the stringent condition that the scientist has to get one replication of a significant effect before they publish it."

karthik durvasula (2014-11-22 16:33):

I am wondering if there is a simpler calculation. A false positive means you have claimed a difference when the null hypothesis is true. So power and effect size are irrelevant to the calculation (because they depend on the alternative being true). Therefore, the relevant calculation depends primarily on the alpha criterion.

    nScientists = 1000
    alpha = 0.025  # two-tailed t-test
    nExp = 200
    P_Null = 0.2

    # First: the probability of no false positives for one scientist
    nNull = nExp * P_Null
    P_NoFalsePositives_Onescientist = (1 - alpha)^nNull

    # Second: the probability of no false positives across N scientists
    P_NoFalsePositives = P_NoFalsePositives_Onescientist^nScientists
    (P_AtleastOneFalsePositive = 1 - P_NoFalsePositives)

"P_AtleastOneFalsePositive" will also be the proportion of scientists who will have at least one false positive in their lifetime. Given your values, it should be ~1.

Have I misunderstood the question?
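A quick way to see how much the replication requirement changes this calculation: under the null, two significant results in a row occur with probability alpha squared rather than alpha, so the per-scientist chance of ever publishing a false positive drops sharply. Here is a minimal sketch, written in Python for self-containedness (the calculation above is in R), and assuming one particular reading of the condition, namely that the replication must come in the immediately following experiment:

```python
# Sketch: per-scientist false-positive rates with and without the
# successive-replication rule. ASSUMPTION: "replication" means the
# immediately following experiment must also reach p < .05; other
# readings of the condition would give different numbers.
alpha = 0.025            # two-tailed t-test criterion
n_exp = 200              # experiments in one scientist's career
p_null = 0.2             # proportion of studied effects that are truly null
n_null = n_exp * p_null  # expected number of null effects (40)

# Without replication: each null effect yields a false positive
# with probability alpha.
p_fp_career_no_rep = 1 - (1 - alpha) ** n_null       # approx 0.64

# With immediate replication: a null effect must come up significant
# twice in succession, probability alpha**2.
p_fp_career_rep = 1 - (1 - alpha ** 2) ** n_null     # approx 0.025

print(p_fp_career_no_rep, p_fp_career_rep)
```

Raising these to the power of 1000 scientists, as in the comment's calculation, still gives essentially probability 1 that someone in the field has a false positive; what changes is the per-scientist rate, which falls from roughly two-thirds to a few percent under this reading of the condition.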