Have you heard about that experimental psychologist? He decided that his participants did not exist, because the probability of selecting them, assuming they exist, was very small indeed (p < .001). Fortunately, his colleagues were quick to reply that he was mistaken: he should decide that they do exist, because the probability of selecting them, assuming they do not exist, is also very remote (p < .001). Yes, even unfunny jokes can reveal the silliness of significance testing.
But sometimes the silliness is more subtle, for instance in a recent blog post by Daniel Lakens, the 20% Statistician, titled "Why Type I errors are more important than Type 2 errors (if you care about evidence)." The logic of his post is so confused that I really do not know where to begin. So, I will take aim at his main conclusion: that type I error inflation quickly destroys the evidence in your data.
(Note: this post uses MathJax, and I've found that this does not work well on a (well, my) mobile device. It's pretty much unreadable there.)
Lakens seems to believe that the long-term error probabilities associated with decision procedures have something to do with the actual evidence in your data. What he basically does is define evidence as the ratio of power to size (i.e. the probability of a type I error). This is a form of the positive likelihood ratio $$PLR = \frac{1 - \beta}{\alpha},$$ which makes it plainly obvious that manipulating $\alpha$ (for instance, by multiplying it by some constant $c$) influences the PLR more than manipulating $\beta$ by the same amount. So, his definition of "evidence" makes part of his conclusion true by definition: $\alpha$ has more influence on the PLR than $\beta$. But it is silly to conclude from this that the type I error rate destroys the evidence in your data.
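The arithmetic is easy to check. Here is a minimal numerical sketch; the values $\alpha = .05$, $\beta = .20$, and the multiplier $c = 3$ are my own, chosen purely for illustration:

```python
# Minimal sketch: because alpha sits in the denominator of
# PLR = (1 - beta) / alpha, multiplying alpha by c divides the PLR by c,
# while multiplying beta by c barely moves it when beta is small.

def plr(alpha, beta):
    """Positive likelihood ratio: power divided by the type I error rate."""
    return (1 - beta) / alpha

alpha, beta, c = .05, .20, 3

print(plr(alpha, beta))       # 16.0  (baseline)
print(plr(c * alpha, beta))   # ~5.33 (alpha tripled: PLR divided by 3)
print(plr(alpha, c * beta))   # 8.0   (beta tripled: PLR merely halved)
```

So, by construction, "evidence" defined this way is more sensitive to $\alpha$ than to $\beta$; the definition does all the work.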
The point is that $\alpha$ and $\beta$ (the probabilities of type I and type II errors) say nothing about the actual evidence in your data. To be sure, if you commit one of these errors, it is the data (in NHST, combined with arbitrary, i.e. unjustified, cut-offs) that lead you to the error. Thus, even $\alpha = .01$ and $\beta = .01$ do not guarantee that your actual data lead to a correct decision.
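A small simulation makes this concrete. It is entirely my own illustration (the one-sample t-test and all settings are assumptions, not anything from Lakens's post): even when the procedure's long-run error rate is a mere 1%, some datasets will still hand you the wrong decision, and nothing in $\alpha$ tells you whether yours is one of them.

```python
# Simulation (my own illustration): draw datasets under a true null
# hypothesis and test each one at alpha = .01. About 1% of them still
# yield p < .01, i.e. an incorrect rejection. The long-run rate says
# nothing about whether *this* dataset misled you.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, reps = .01, 30, 10_000

rejections = 0
for _ in range(reps):
    sample = rng.normal(loc=0, scale=1, size=n)   # null hypothesis is true
    _, p = stats.ttest_1samp(sample, popmean=0)
    rejections += p < alpha

print(rejections / reps)  # close to .01: rare, but each such decision is wrong
```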
Part of the problem is that Lakens confuses evidence and decisions, which is a very common confusion in NHST practice. But deciding to reject a null hypothesis is not the same as having evidence against it (there is this thing called a type I error). It seems that NHST-ers and NHST apologists find this very, very hard to understand. As my grandmother used to say: deciding that something is true does not make it true.
I will try to make plausible that decisions are not evidence (see also my previous post here). This should be enough to show you that the error probabilities associated with the decision procedure tell you nothing about the actual evidence in your data. In other words, it should be enough to convince you that type I error rate inflation does not destroy the evidence in your data, contrary to the 20% Statistician's conclusion.
Let us consider whether the frequency of correct (or false) decisions is related to the evidence in the data. Suppose I tell you that I have a Baloney Detection Kit (based, for example, on the baloney detection kit at skeptic.com), and suppose I tell you that according to my Baloney Detection Kit the 20% Statistician's post is, well, Baloney. Indeed, the quantitative measure (amount of Baloneyness) I use to make the decision is well above the critical value. I am also pretty confident about my decision to categorize the post as Baloney, because my decision procedure rarely leads to incorrect decisions: the probability that I decide that something is Baloney when it is not is only $\alpha = .01$, and the probability that I decide that something is not Baloney when it is in fact Baloney is only 1% as well ($\beta = .01$).
Now, the 20% Statistician's conclusion states that manipulating $\alpha$, for instance by setting $\alpha = .10$, destroys the evidence in my data. Let's see. The evidence in my data is, of course, the amount of Baloneyness of the post; suppose my evidence is that the post contains 8 dubious claims. How does setting $\alpha$ have any influence on the amount of Baloneyness? The only thing setting $\alpha$ does is influence how often I incorrectly decide that something is Baloney when it is not. No matter what value of $\alpha$ (or $\beta$, for that matter) we use, the amount of Baloneyness in this particular post (i.e. the evidence in the data) remains 8 dubious claims.
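To make this concrete, here is a hypothetical sketch of the Baloney detector in code. The Poisson model for the number of dubious claims, its null mean, and the cut-off rule are all my own illustrative assumptions; only the figure of 8 dubious claims comes from the example above.

```python
# Hypothetical Baloney Detection Kit: the observed evidence (8 dubious
# claims in this particular post) is a fixed property of the data.
# Changing alpha only shifts the critical value, i.e. how often the
# procedure cries "Baloney!" over innocent posts in the long run.

from scipy import stats

observed_baloneyness = 8   # dubious claims found in this post (the evidence)
null_mean = 3              # assumed mean claim count for non-Baloney posts

for alpha in (.01, .10):
    # smallest count k such that P(X >= k | not Baloney) <= alpha
    critical_value = stats.poisson.ppf(1 - alpha, null_mean) + 1
    decision = "Baloney" if observed_baloneyness >= critical_value else "not Baloney"
    print(f"alpha={alpha}: critical value={critical_value:.0f}, "
          f"decision={decision}, evidence=8 dubious claims")
```

Note that moving $\alpha$ from .01 to .10 can flip the decision, but the evidence printed on the last line never changes: the decision procedure and the data are two different things.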
To be sure, if you tell the 20% Statistician that his post is Baloney, he will almost certainly not ask how often you are right and wrong in the long run (characteristics of the decision procedure); he will want to see your evidence. Likewise, he will probably not argue that your decision procedure is inadequate for the task at hand (maybe it applies to science only, and not to non-scientific blog posts), but he will argue about the evidence (maybe by simply deciding (!) that what you are saying is wrong, or by claiming that the post does not contain 8 dubious claims, but only 7).
The point is, of course, this: the long-term error probabilities $\alpha$ and $\beta$ associated with the decision procedure have no influence on the actual evidence in your data. The conclusion of the 20% Statistician is simply wrong. Type I error inflation does not destroy the evidence in your data, nor does type II error inflation.