I am not a fan of NHST (Null Hypothesis Significance Testing). Or maybe I should say, I am no longer a fan. I used to believe that rejecting null hypotheses of zero differences on the basis of the p-value was the proper way of gathering evidence for my substantive hypotheses. And the evidential nature of the p-value seemed so obvious to me that I frequently got angry when encountering what I believed were incorrect p-values, reasoning that if the p-value is incorrect, so must be the evidence in support of the substantive hypothesis.
For this reason, I refused to use the significance tests that were most frequently used in my field: performing a by-subjects analysis and a by-items analysis and concluding that an effect exists if both are significant. The by-subjects analysis in particular regularly leads to p-values that are too low, which makes you believe you have evidence when you really don't. And so I spent a huge amount of time, coming from almost no statistical background (I had followed no more than a few introductory statistics courses), mastering mixed model ANOVA and hierarchical linear modelling, at least up to a reasonable degree, i.e. being able to get p-values for several experimental designs. These techniques, so I believed, gave me correct p-values. Nowadays, this all seems rather silly to me.
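Why the by-subjects analysis misleads can be shown with a small simulation, a sketch of the general problem rather than any specific analysis from my own work (the sample sizes and variances below are illustrative assumptions). When the two conditions use different items, the item effects are shared by every subject, but a by-subjects paired t-test treats them as ordinary subject-level noise. With no true condition effect at all, the test then rejects far more often than the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_subj, n_items = 2000, 20, 10
sigma_item = sigma_noise = 1.0
t_crit = 2.093  # two-sided 5% critical value for t(df = 19)

false_pos = 0
for _ in range(n_sims):
    # Different items per condition; their effects are shared by all subjects.
    b1 = rng.normal(0, sigma_item, n_items)
    b2 = rng.normal(0, sigma_item, n_items)
    # Each subject's condition mean: the item effects plus subject-level
    # noise, averaged over items. There is NO true condition effect.
    m1 = b1.mean() + rng.normal(0, sigma_noise, (n_subj, n_items)).mean(axis=1)
    m2 = b2.mean() + rng.normal(0, sigma_noise, (n_subj, n_items)).mean(axis=1)
    # By-subjects paired t-test on the per-subject condition means.
    d = m1 - m2
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n_subj))
    false_pos += abs(t) > t_crit

print(false_pos / n_sims)  # far above the nominal 0.05
```

The shared item-sampling noise shifts every subject's difference score by the same amount, so it inflates the observed mean difference without inflating the standard error the test uses, and the Type I error rate explodes.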
I still have some NHST unlearning to do. For example, I frequently catch myself looking at a 95% confidence interval to see whether zero lies inside or outside it, and actually feeling happy when zero lies outside it (which is exactly when the result is statistically significant at the 5% level). Apparently, traces of NHST are strongly embedded in my thinking. I still have to tell myself not to be silly, so to speak.
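That correspondence is no coincidence: a 95% confidence interval for a mean excludes zero exactly when the two-sided t-test rejects the zero null at the 5% level, because both compare the same t statistic to the same critical value. A minimal numpy sketch with made-up data (the function name and inputs are my own illustration):

```python
import numpy as np

def t_test_and_ci(x, t_crit=2.093):  # t_crit for df = 19, alpha = .05
    """Two-sided one-sample t-test of mean = 0, plus the matching 95% CI."""
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    t = x.mean() / se
    ci = (x.mean() - t_crit * se, x.mean() + t_crit * se)
    significant = abs(t) > t_crit
    zero_outside = ci[0] > 0 or ci[1] < 0
    return significant, zero_outside

rng = np.random.default_rng(0)
for _ in range(1000):
    sig, outside = t_test_and_ci(rng.normal(0.3, 1.0, 20))
    assert sig == outside  # the two criteria always agree
```

So checking whether zero is inside the interval is just NHST in disguise; the interval only adds information if you look at its location and width, not merely at whether it covers zero.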
One reason for writing this blog is to sharpen my thinking about NHST and to try to figure out new and comprehensible ways of explaining to students and researchers why they should be very careful about treating NHST as the sine qua non of research. Of course, if you really want to make your reasoning clear, one of the first things you should do is define the concepts you're reasoning about. The purpose of this post is therefore to make clear what my "definition" of NHST is.
My view of NHST is very much based on how Gigerenzer et al. (1989) describe it:
"Fisher's theory of significance testing, which was historically first, was merged with concepts from the Neyman-Pearson theory and taught as "statistics" per se. We call this compromise the "hybrid theory" of statistical inference, and it goes without saying that neither Fisher nor Neyman and Pearson would have looked with favor on this offspring of their forced marriage." (p. 123, italics in original).
Actually, Fisher's significance testing and Neyman-Pearson's hypothesis testing are fundamentally incompatible (I will come back to this later), but almost no texts explaining statistics to psychologists "presented Neyman and Pearson's theory as an alternative to Fisher's, still less as a competing theory. The great mass of texts tried to fuse the controversial ideas into some hybrid statistical theory, as described in section 3.4. Of course, this meant doing the impossible." (p. 219, italics in original).
So, NHST is an impossible, as in logically incoherent, "statistical theory", because it (con)fuses concepts from incompatible statistical theories. If this is true, which I think it is, doing science with a small s, which involves logical thinking, disqualifies NHST as a main means of statistical inference. But let me write a little bit more about Fisher's ideas and those of Neyman and Pearson, to explain the illogic of NHST.