University of Edinburgh
2025-05-01
Suppose you have a treatment that you suspect may alter performance on a certain task. You compare the means of your control and experimental groups (say 20 subjects in each sample). Further, suppose you use a simple independent means t-test and your result is significant (t = 2.7, d.f. = 18, p = 0.01). Please mark each of the following statements as true or false.
You have absolutely disproved the null hypothesis (that is, there is no difference between the population means).
You have found the probability of the null hypothesis being true.
You have absolutely proved your experimental hypothesis (that there is a difference between the population means).
You can deduce the probability of the experimental hypothesis being true.
You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.
You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 99% of occasions.
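All six statements are false, which is Gigerenzer's point: readers nonetheless endorse them routinely. Statement 6, for example, treats p = 0.01 as if it were a replication probability. The sketch below makes the gap concrete; the true effect size (Cohen's d = 0.8) and the normal sampling model are illustrative assumptions, not part of the questionnaire.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

d = 0.8         # hypothetical true effect size (an assumption for illustration)
n = 20          # subjects per group, as in the scenario
reps = 10_000   # simulated replications of the experiment

significant = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        significant += 1

print(f"Replications significant at p < .05: {significant / reps:.2%}")
# With d = 0.8 and n = 20 per group this comes out near 69%,
# nowhere near the 99% suggested by reading p = 0.01 as a
# probability of replication.
```

Even under a generously large true effect, a single result at p = 0.01 comes nowhere near guaranteeing significance in 99% of replications.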
Gigerenzer (2004), Gigerenzer, Krauss & Vitouch (2004), Gigerenzer (2018).
Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research hypothesis or of any alternative substantive hypotheses.
Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p < 0.05, p < 0.01, or p < 0.001 (whichever comes next to the obtained p-value).
Always perform this procedure.
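Taken literally, the ritual is a mechanical rule that discards the exact p-value in favour of a conventional threshold. A minimal sketch, assuming two independent samples and SciPy's ttest_ind (the function name is mine):

```python
from scipy import stats

def null_ritual_report(group_a, group_b):
    """The ritual as a mechanical rule: keep only the conventional
    level the obtained p-value falls under (a sketch, not advice)."""
    t, p = stats.ttest_ind(group_a, group_b)
    for level in (0.001, 0.01, 0.05):
        if p < level:
            return f"significant, p < {level}"
    return "not significant (n.s.)"
```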
The null ritual would have been rejected by the very statisticians it is usually attributed to. What Fisher himself recommended in his later writings, as summarized by Gigerenzer, was roughly this:
Set up a statistical null hypothesis. The null need not be a nil hypothesis (i.e., zero difference).
Report the exact level of significance (e.g., p = 0.051 or p = 0.049). Do not use a conventional 5% level, and do not talk about accepting or rejecting hypotheses.
Use this procedure only if you know very little about the problem at hand.
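The contrast with the ritual is easiest to see in the reporting step. A minimal sketch in the same two-sample setting (again, the function name and output format are mine):

```python
from scipy import stats

def fisher_style_report(group_a, group_b):
    """Report the exact significance level: no conventional cutoff,
    no talk of accepting or rejecting hypotheses."""
    t, p = stats.ttest_ind(group_a, group_b)
    return f"t = {t:.2f}, exact p = {p:.3f}"
```

Reported this way, p = 0.049 and p = 0.051 stay side by side instead of being pushed to opposite sides of a conventional line.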